It’s almost completely ineffective, sorry. It’s certainly not as effective as exfiltrating weights via neighborly means.
On Glaze and Nightshade, my prior rant hasn't yet been invalidated, and there's no upcoming mathematics that tilts the scales in favor of anti-training techniques. In general, scrapers for training sets are now augmented with alignment models, which test inputs to see how well the tags line up; your example might be rejected as insufficiently normal-cat-like.
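To make the filtering step concrete, here is a minimal sketch of that kind of alignment check, assuming a CLIP-style model (openai/clip-vit-base-patch32 via Hugging Face transformers); the model choice, threshold, and file names are illustrative, not a description of any particular scraper's pipeline.

```
# Hypothetical alignment filter: keep a scraped (image, caption) pair only if
# the caption embedding is close enough to the image embedding.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(MODEL_NAME)
processor = CLIPProcessor.from_pretrained(MODEL_NAME)

def caption_matches_image(image_path: str, caption: str, threshold: float = 0.25) -> bool:
    """Return True if the image/caption cosine similarity clears the threshold.

    Pairs scoring below the threshold would be dropped or sent for retagging
    rather than trained on.
    """
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    outputs = model(**inputs)
    # logits_per_image is the scaled cosine similarity between the image and
    # each candidate caption; divide out the learned scale to recover it.
    score = outputs.logits_per_image[0, 0].item() / model.logit_scale.exp().item()
    return score >= threshold

# Example (hypothetical file): a perturbed "cat" whose embedding has drifted
# toward some other concept would tend to score low against its own tag.
# caption_matches_image("scraped/cat_1234.png", "a photo of a cat")
```

The point of the sketch is only that the check is cheap relative to training: one forward pass per scraped pair decides whether the tag and the pixels still agree.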
I think that “force-feeding” is probably not the right metaphor. At scale, more effort goes into cleaning and tagging than into scraping; most of that “forced” input is destined to be discarded or retagged.