I wrote a ✨ new blog post ✨ about privacy in AI: five things that privacy experts know about AI, but that might not be so obvious outside our community 😶‍🌫️
Link here ➡️ https://desfontain.es/blog/privacy-in-ai.html
=> More information about this toot | More toots from tedted@hachyderm.io
Maybe for the shortform social media audience I should have chosen one of the spicier diagrams as a teaser image 🤔
=> More information about this toot | More toots from tedted@hachyderm.io
@tedted The trouble is that memorization is kind of inherent to the project. Or at least, somewhere between THE CORTEX AND THE CRITICAL POINT and Golden Gate Claude, that is the conclusion that I've reached.
=> More information about this toot | More toots from kevinriggle@ioc.exchange
@tedted Predictive power is maximized when the network approaches the limit of one input neuron activation producing one output neuron activation
=> More information about this toot | More toots from kevinriggle@ioc.exchange
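[Editor's note: a minimal toy sketch of the branching-ratio idea behind the claim above — not code from the thread or from the book linked below. Each active unit activates, on average, `sigma` downstream units; when `sigma` is below 1 activity dies out, above 1 it saturates, and near 1 (one activation in, roughly one activation out) the output best preserves information about the input. The model, parameters, and function names are this sketch's own assumptions.]

```python
# Toy branching-process illustration (assumption: a simple saturating
# branching model stands in for "one input activation -> one output activation").
import random

def run(n_active, sigma, n_units=2_000, steps=50, rng=None):
    """Propagate activity for `steps` rounds and return the final active count."""
    rng = rng or random
    for _ in range(steps):
        # Per-unit activation probability; the min() caps activity at n_units,
        # mimicking a finite population of neurons (saturation).
        p = min(1.0, sigma * n_active / n_units)
        n_active = sum(rng.random() < p for _ in range(n_units))
    return n_active

def mean_output(n_in, sigma, trials=10):
    return sum(run(n_in, sigma) for _ in range(trials)) / trials

inputs = (20, 200, 1000)
for sigma in (0.8, 1.0, 1.2):
    outs = [round(mean_output(n, sigma)) for n in inputs]
    print(f"sigma={sigma}: inputs {inputs} -> mean outputs {outs}")

# sigma < 1: all inputs decay toward 0 and become indistinguishable.
# sigma > 1: all inputs saturate near n_units, also indistinguishable.
# sigma ~ 1: the output (noisily) tracks the input -- the regime where one
#            activation begets roughly one activation and the most
#            information about the input survives.
```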
@kevinriggle Yeah I wanted to keep the main text as concise as I could so I didn't get into detail, but this "memorization may just be a fundamental requirement" point is super fascinating to me (and seems important philosophically) so I point it out in the smaller text below.
=> More information about this toot | More toots from tedted@hachyderm.io
@tedted Highly recommend this for the theory as well as some cutting-edge neuroscience which seems to bear it out: https://www.amazon.com/Cortex-Critical-Point-Understanding-Emergence/dp/0262544032
=> More information about this toot | More toots from kevinriggle@ioc.exchange