@KuteboiCoder i'll take a look, but yeah. especially some of the stuff like liquid state machines, which are being tested on like, 2,000-neuron "reservoirs" and are detecting features in video. or KAN networks, which are more expensive to train (because they use multi-point b-splines as activation functions) but need so many fewer units that you still come out ahead.
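(for anyone curious what the reservoir idea looks like in practice, here's a rough sketch of an echo-state-style reservoir--continuous, not a spiking LSM--where the recurrent weights are random and frozen and only a linear readout gets trained. the sizes and the ridge readout are just illustrative choices, not from any particular paper.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 8, 200, 3      # small reservoir here; the post above mentions ~2,000 units

# input and recurrent weights are random and never trained
W_in  = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # keep spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed random reservoir with a sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)                               # shape (T, n_res)

def train_readout(states, targets, ridge=1e-2):
    """Fit only the linear readout with ridge regression; the reservoir stays fixed."""
    A = states.T @ states + ridge * np.eye(n_res)
    return np.linalg.solve(A, states.T @ targets)          # shape (n_res, n_out)

inputs  = rng.normal(size=(500, n_in))                     # toy input sequence
targets = rng.normal(size=(500, n_out))                    # toy target sequence
W_out   = train_readout(run_reservoir(inputs), targets)
preds   = run_reservoir(inputs) @ W_out
```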
it also looks like deep learning is a meme. encoder/decoder networks seem to be fixed by genetics or some one-off initialization process, and only one or two layers at the back actually handle interpreting (and, maybe, communicate with themselves through loops--as is theorized with the phonological loop, esp. people talking to themselves to solve problems)
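(the "fixed front end, only the last layer or two learn to interpret" idea is basically frozen-encoder training / linear probing. a toy sketch, assuming pytorch; the encoder here is just a random stand-in MLP, not any real pretrained model or a claim about how brains do it.)

```python
import torch
import torch.nn as nn

# stand-in "encoder" -- pretend it was fixed by genetics / a one-off init and is never updated
encoder = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
)
for p in encoder.parameters():
    p.requires_grad_(False)

# only the readout at the back actually learns to interpret the features
readout = nn.Linear(128, 10)
opt = torch.optim.SGD(readout.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 32)                  # toy batch
y = torch.randint(0, 10, (64,))
for _ in range(100):
    logits = readout(encoder(x))
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```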