@w0bb1t If I were doing this eleven years ago, I would have used a small character-level RNN language model trained on Samuel Butler's Erewhon. But I didn't do that exactly (https://github.com/douglasbagnall/recur/blob/master/text-predict.c).
A small character-level RNN model will be more space-efficient than a word-level ngram Markov model, and its output is more interesting, though just as nonsensical.
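For contrast, here is a minimal sketch of the word-level ngram Markov approach being compared against (not the RNN from the linked repo). The model is just a table mapping each n-word context to the words observed after it, which is why its size grows with the vocabulary and corpus rather than staying at a fixed weight count like an RNN's. The tiny corpus string is illustrative, not from Erewhon.

```python
import random
from collections import defaultdict

def train_markov(text, order=2):
    """Build a word-level ngram Markov model: map each
    `order`-word context to the list of words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, length=15, seed=0):
    """Walk the model: pick a random context, then repeatedly
    sample a follower and slide the context window forward."""
    rng = random.Random(seed)
    context = rng.choice(list(model.keys()))
    out = list(context)
    for _ in range(length):
        followers = model.get(tuple(out[-len(context):]))
        if not followers:  # dead end: context never seen mid-corpus
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Illustrative corpus only; a real run would use a whole book.
corpus = ("the machines were to the men what the men were to the machines "
          "and the men served the machines as the machines served the men")
model = train_markov(corpus, order=2)
print(generate(model))
```

Every output word is drawn verbatim from the training text, so the text is locally fluent but globally nonsensical; a character-level RNN instead compresses the corpus into fixed-size weights and can emit novel (mis)spellings.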
Whether this poisoning approach has any effect is another matter (given the web is already just SEO bilgewater and the LLMs are poisoning each other anyway).