My latest article on unlawful training of AI models and what we should do about it just dropped on TheRegister today:
https://www.theregister.com/2024/12/22/ai_poisoned_tree/
#ethics #ai #genai #llm #trainingdata #privacy #dataprotection #copyright #commons #law
=> More information about this toot | More toots from thatprivacyguy@eupolicy.social
@thatprivacyguy
The models, open source or not, are useless when you don't own a massive data center. So I would say delete them to avoid further environmental harm.
=> More information about this toot | More toots from winfriedtilanus@mastodon.nl
@winfriedtilanus that isn't true - running the models is not as resource intensive as training them.
I run many LLMs on my own systems at home (not even specialised AI hardware, and tech that is 4+ generations old). I even run Stable Diffusion models on my phone.
And open source communities have historically filled the gap by providing hosting solutions - I expect we would see the same happen here (or a distributed, peer-to-peer solution similar to Folding@Home).
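For illustration, a minimal sketch of running a small model locally with the Hugging Face transformers library - the model name is just an example of a ~1B-parameter checkpoint; any similarly sized model that fits in a few GB of RAM would do:

```python
# Minimal sketch: local text generation on modest hardware.
# The model name below is only an example small checkpoint (assumption),
# not a recommendation - swap in any local model you have downloaded.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example ~1B model (assumed)
    device_map="auto",  # uses a GPU if present, otherwise falls back to CPU
)

out = generator(
    "Why is running a model cheaper than training one?",
    max_new_tokens=64,
)
print(out[0]["generated_text"])
```

On a laptop CPU this is slow but perfectly workable for a small model, which is the point: inference does not need a data center.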
=> More information about this toot | More toots from thatprivacyguy@eupolicy.social
@thatprivacyguy
Fair, the smaller models can certainly be run on a simple laptop. But I see wildly varying estimates of the cost of a single prompt on the larger models, ranging from a few cents per prompt (still a lot) to thousands of euros for a 'brute force mode'. Probably needs more investigation...
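For what it's worth, here is a rough back-of-envelope sketch of the marginal energy cost of one prompt. Every number in it (server power draw, throughput, electricity price, tokens per prompt) is an assumption for illustration, not a measurement:

```python
# Back-of-envelope estimate of the energy cost of a single prompt.
# All figures are assumptions: an 8-GPU inference server drawing ~5 kW
# under load, serving ~100 tokens/s per GPU.
gpus = 8
power_kw = 5.0                  # whole-server draw under load (assumed)
tokens_per_s = 100 * gpus       # aggregate throughput (assumed)
price_per_kwh = 0.30            # EUR, rough EU retail electricity price (assumed)
tokens_per_prompt = 500         # prompt + completion (assumed)

seconds = tokens_per_prompt / tokens_per_s
energy_kwh = power_kw * seconds / 3600
print(f"~{energy_kwh * price_per_kwh * 100:.3f} cents of electricity per prompt")
# Hardware amortisation, cooling and idle capacity push the real per-prompt
# cost well above the pure energy figure, but the arithmetic stays in the
# cents range, not thousands of euros, under these assumptions.
```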
=> More information about this toot | More toots from winfriedtilanus@mastodon.nl