Toot

Written by dbagnall on 2025-01-23 at 10:04

@w0bb1t If I were doing this eleven years ago, I would have used a small character-level RNN language model trained on Samuel Butler's Erewhon. But I didn't do that exactly (https://github.com/douglasbagnall/recur/blob/master/text-predict.c).

A small character-level RNN model will be more space-efficient than a word-level n-gram Markov model, and its output more interesting, though just as nonsensical.
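For contrast, a minimal sketch of the word-level n-gram approach being compared against (this is an illustrative bigram Markov chain, not the code in the linked repo; all names here are hypothetical): the model must store every observed word-to-word transition, which is why it grows with the corpus vocabulary, whereas an RNN compresses the text into a fixed set of weights.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Word-level bigram Markov model: map each word to the
    list of words observed to follow it (duplicates kept, so
    sampling reflects transition frequencies)."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=20, seed=None):
    """Random walk over the transition table, up to n words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: last word never had a successor
        out.append(rng.choice(successors))
    return " ".join(out)
```

The transition table is the whole model, so its size scales with the distinct word pairs in the training text; a character-level RNN's size is fixed by its weight matrices regardless of corpus length, which is the space-efficiency point above.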

Whether this poisoning approach has any effect is another matter (given the web is already just SEO bilgewater and the LLMs are poisoning each other anyway).

=> View attached media

=> More information about this toot | View the thread | More toots from dbagnall@tldr.nettime.org

Mentions

=> View w0bb1t@tldr.nettime.org profile

Tags

Proxy Information
Original URL
gemini://mastogem.picasoft.net/toot/113877102777818340