Ancestors

Toot

Written by Anil Dash on 2024-12-11 at 20:05

It is such a stupid and obvious market failure that nobody has made a consumer AI LLM product that is 1. trained on consensually-acquired material 2. powered with renewable energy 3. genuinely open about its weights and models. Just achieving these things and being creator-friendly would be massive.

=> More information about this toot | More toots from anildash@me.dm

Descendants

Written by aburtch on 2024-12-11 at 20:08

@anildash Especially one that you could then train on your own data. Just the "run on renewable energy" alone would make it huge.

=> More information about this toot | More toots from aburtch@triangletoot.party

Written by Waldo Jaquith on 2024-12-11 at 20:08

@anildash I'd hoped that Apple was doing this, when they signed that licensing deal with the New York Times. Perhaps they'll do it yet.

=> More information about this toot | More toots from waldoj@mastodon.social

Written by Joe on 2024-12-11 at 20:09

@anildash Sorry, #2 doesn't help if vast amounts of energy are required. The reason is that at any given time, there's only so much renewable energy available, and if a wealthy company buys it up for AI use, the electricity grid will have to burn more fossil fuel to keep up with total demand. That means that lower power approaches are urgently needed.

=> More information about this toot | More toots from not2b@sfba.social

Written by Kim Spence-Jones 🇬🇧😷 on 2024-12-12 at 02:22

@not2b @anildash

We need to define “runs on renewable energy” to mean that you have to finance and install your own renewable energy infrastructure, and export enough energy to offset the times you have to consume from the grid.

=> More information about this toot | More toots from KimSJ@mastodon.social

Written by mcc on 2024-12-11 at 20:10

@anildash i was actually thinking, as an art project, of getting a solar panel and doing this with a collection of CC0 content. i decided not to pursue this after seeing how things developed with OpenAI, on the grounds that if a true-open model existed, the proponents of closed/stolen models would point to my open model to go "see? AI doesn't have to be based on stolen content!" then continue using the stolen content.

=> More information about this toot | More toots from mcc@mastodon.social

Written by mcc on 2024-12-11 at 20:14

@anildash Put a different way, I think one reason this doesn't exist is that the presence of stolen material in LLM models is not a flaw, but the primary attraction. Copyright laundering is the core product.

If the users did not want to do copyright laundering, then the product might not even need the machine learning model at all; in that world a simple tag system might be adequate. The purpose the model serves in the system is to randomize the inputs enough to disguise the sources.

=> More information about this toot | More toots from mcc@mastodon.social

Written by Dr Kim Foale on 2024-12-11 at 20:40

@mcc @anildash i think it's very easy to argue that people who licensed their work under CC0 (or any CC for that matter) did not actively consent to having their work used to train LLMs, a technology that didn't exist (at least in the mainstream) at the time of licensing.

=> More information about this toot | More toots from kim@social.gfsc.studio

Written by mcc on 2024-12-11 at 20:41

@kim @anildash i don't think this is a serious argument.

=> More information about this toot | More toots from mcc@mastodon.social

Written by Dr Kim Foale on 2024-12-11 at 20:42

@mcc @anildash excuse me?

=> More information about this toot | More toots from kim@social.gfsc.studio

Written by mcc on 2024-12-11 at 20:44

@kim @anildash With regard to CC0, I mean. CC0 is categorically different from the others.

=> More information about this toot | More toots from mcc@mastodon.social

Written by django on 2024-12-12 at 03:56

@kim @mcc @anildash doesn’t content produced pre-OpenAI and licensed CC0 assume that another person would be remixing or using it?

IIRC Creative Commons looked at adding LLM exclusion clauses, but ultimately decided against it since they didn’t think they would be enforceable

=> More information about this toot | More toots from django@social.coop

Written by your auntifa liza 🇵🇷 🦛 🦦 on 2024-12-11 at 20:47

@mcc @anildash it’s why i call it plagiarism-as-a-service. because it is plagiarism.

=> More information about this toot | More toots from blogdiva@mastodon.social

Written by Cassandra Granade 🏳️‍⚧️ on 2024-12-11 at 20:53

@mcc @anildash As an absolute layperson, it seems to me there's this weird legal situation where if you cause harm to so very many people that it's impossible to tell exactly who is hurt by your actions, you basically get away with it, because no one can prove in court that they in particular were hurt.

LLMs appear to be popular with capital owners largely on the basis that they can efficiently exploit this hack.

=> More information about this toot | More toots from xgranade@wandering.shop

Written by Princess Unikitty on 2024-12-11 at 20:54

@xgranade @mcc @anildash

Pollution is like this: Very hard to prove exactly which toxic molecule made you sick and what factory it came from.

=> More information about this toot | More toots from unikitty@kolektiva.social

Written by naught101 on 2024-12-11 at 22:49

@unikitty @xgranade @mcc @anildash

And the health insurance industry (in the US)

=> More information about this toot | More toots from naught101@mastodon.social

Written by Zen Heathen on 2024-12-11 at 21:25

@mcc They should be called Plagiarism Machines. That's all they are--they steal everyone else's words, and restate them just enough to try not to get sued.

@anildash

=> More information about this toot | More toots from zenheathen@mstdn.ca

Written by Anil Dash on 2024-12-12 at 01:37

@zenheathen @mcc I have a local one I trained on my own words so I can make art things for myself. It’s valid.

=> More information about this toot | More toots from anildash@me.dm

Written by Chris Lilley 🏴󠁧󠁢󠁳󠁣󠁴󠁿 on 2024-12-12 at 04:25

@anildash @zenheathen @mcc Yes. This is informed consent.

=> More information about this toot | More toots from svgeesus@mastodon.scot

Written by Nelson Minar on 2024-12-12 at 05:53

@anildash I would love to learn about the technical details

=> More information about this toot | More toots from nelson@tech.lgbt

Written by bruce oberg on 2024-12-12 at 07:30

@nelson @anildash ditto

=> More information about this toot | More toots from bruceoberg@xoxo.zone

Written by Anil Dash on 2024-12-12 at 01:35

@mcc I think about this a lot. The “then they’ll use it to justify the bad thing”. But they do that anyway, and we end up without the ethical thing. Like… we’re on Mastodon. You know who literally forked it to make a fascist network. They would have done that anyway! But this is still a thing of value.

=> More information about this toot | More toots from anildash@me.dm

Written by @990000@mstdn.social on 2024-12-11 at 20:16

@anildash probably because they know it's simply not tenable when applied to generic search.

All the more ethical ones seem limited in scope and breadth from what I've noticed, such as in academia and science, using smaller, more focused data sets.

The ones that are too ambitious in scope are all bad because they're targeting everyone, for any kind of use case. I think it's basically greed.

=> More information about this toot | More toots from 990000@mstdn.social

Written by Xironimous Wu on 2024-12-11 at 20:55

@anildash Not many would consent unless they could get a share of the profits. That requires reducing profits of shareholders, and attribution (which is a hard problem for LLMs).

Data centres use enormous amounts of power and water. Both are limited resources so they'll interfere with decarbonisation elsewhere.

The profit model is based on taking jobs away from people. So consensual training and renewable power won't make it ethical, or stop them from being glorified word salad generators.

=> More information about this toot | More toots from xironwu@mastodon.social

Written by Justin Fitzsimmons on 2024-12-11 at 21:49

@anildash the corporate models that spend billions of dollars to harness a country's worth of power and boil the oceans to train on unethically sourced data result in a service that isn't appropriate for deployment as anything more sophisticated than a toy (despite how people are actually using them). An "organic" version would perform even worse than the corpo ones that are already unpopular and failing.

=> More information about this toot | More toots from smn@l3ib.org

Written by Anil Dash on 2024-12-12 at 01:36

@smn I don’t believe that’s true. I believe it might enable purpose-specific smaller models that are useful.

=> More information about this toot | More toots from anildash@me.dm

Written by smitty@dice.camp on 2024-12-11 at 22:04

@anildash Can they be open about their weights and models? My (limited) understanding of LLMs is that the models are iterated over kind of genetically and no one actually understands the specifics of a given model.

=> More information about this toot | More toots from smitty@dice.camp

Written by B O on 2024-12-11 at 22:33

@anildash if that model existed, it’d be worse than GPT-1; you need an insane amount of data for good performance

=> More information about this toot | More toots from piyuv@techhub.social

Written by Anil Dash on 2024-12-12 at 01:35

@piyuv depends what it’s for. I’m not sure that’s correct.

=> More information about this toot | More toots from anildash@me.dm

Written by kris on 2024-12-12 at 00:07

@anildash there's some related work here, https://www.fairlytrained.org/certified-models

they certify models trained on consensual data

=> More information about this toot | More toots from kris@ghostly.garden

Written by Len on 2024-12-12 at 00:33

@anildash I was just thinking the other day that it would be nice for text-based games if they had an LLM-driven way to generate a picture of your character from its text description of all the items you're carrying, only with the LLM trained on works produced by the game's own development team.

(Just idle speculation, I have no idea of the technical feasibility or power demands)

=> More information about this toot | More toots from Hyperlynx@aus.social

Written by Axomamma on 2024-12-12 at 00:58

@anildash I'm sorry, AI blows chunks except in very, very narrow use categories - NONE of which include search engines used by the general public. AI is an information wrecking ball.

=> More information about this toot | More toots from Axomamma@mastodon.online

Written by Anil Dash on 2024-12-12 at 01:33

@Axomamma I’m not suggesting a search engine?

=> More information about this toot | More toots from anildash@me.dm

Written by Axomamma on 2024-12-12 at 01:55

@anildash I hate AI with the heat of a thousand suns. It comes up as an issue for me primarily with searches. Does it matter whether you suggested search engines? It's a blight.

=> More information about this toot | More toots from Axomamma@mastodon.online

Written by Anil Dash on 2024-12-12 at 04:52

@Axomamma you should try arguing with whoever is advocating for that, then? like, it's good you're self-aware that you're not engaging in this conversation at an intellectual level.

=> More information about this toot | More toots from anildash@me.dm

Written by Axomamma on 2024-12-12 at 15:28

@anildash Fuck you.

=> More information about this toot | More toots from Axomamma@mastodon.online

Written by Jairaj Devadiga on 2024-12-12 at 01:49

@anildash It is not a market failure since there is no demand for such a product.

=> More information about this toot | More toots from jairajdevadiga@mastodon.social

Written by Christian Kent on 2024-12-12 at 01:56

@anildash I wish this 3-point standard was really famous, like, 2 years ago

=> More information about this toot | More toots from whophd@ioc.exchange

Written by Mark Newton on 2024-12-12 at 08:24

@anildash With whose money? What’s the business case?

=> More information about this toot | More toots from NewtonMark@eigenmagic.net

Written by Petra van Cronenburg on 2024-12-12 at 15:54

@anildash Well ... there's one single word that explains it all, no, two: greed + profit.

=> More information about this toot | More toots from NatureMC@mastodon.online
