I have a question, following a discussion we had with colleagues at lunch: for you, is it a problem that the backpropagation algorithm has no clearly identified biological mechanism?
I have the feeling that this problem was considered important 20 years ago, but I don't see many people mentioning it nowadays. I am not sure a definitive answer has ever been given (if so, tell me about it!), and, well, deep learning is just backprop, backprop everywhere...
=> More information about this toot | More toots from BenoitGirard@sciences.re
@BenoitGirard There are many simplifications in artificial neurons. Consider how many different biological factors are hidden behind every single number in a weight matrix (whether the connection is excitatory or inhibitory, its synaptic strength).
Backprop is also a combination of things: feedback connections (which technically disappear once training is over) and something like a Hebbian rule (but applied across all layers). And it implies a strange effect: feedback arriving at an axon would have to travel back to the soma.
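To make that contrast concrete, here is a minimal sketch (NumPy, all variable names hypothetical, not anyone's actual model) of a two-layer network: the backprop update for the hidden weights needs an error signal carried back through the downstream weights W2, which is exactly the feedback pathway with no obvious biological mechanism, while a Hebbian update uses only locally available activity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # input activity
W1 = rng.normal(size=(4, 3))      # input -> hidden weights
W2 = rng.normal(size=(2, 4))      # hidden -> output weights
target = np.array([1.0, 0.0])
lr = 0.1

h = np.tanh(W1 @ x)               # hidden activity
y = W2 @ h                        # linear output
err = y - target                  # output error (squared-error loss)

# Backprop: the hidden-layer update depends on W2.T @ err, i.e. an error
# signal fed back through the forward weights -- the step without a clear
# biological counterpart.
delta_h = (W2.T @ err) * (1 - h**2)
dW1_backprop = np.outer(delta_h, x)

# Hebbian: purely local ("cells that fire together wire together");
# no feedback pathway, but also no notion of the output error.
dW1_hebb = np.outer(h, x)

W1 -= lr * dW1_backprop           # gradient step (vs. W1 += lr * dW1_hebb)
```

The asymmetry is the whole point: dW1_hebb can be computed at the synapse from pre- and post-synaptic activity alone, whereas dW1_backprop cannot be computed without information flowing backwards through W2.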
=> More information about this toot | More toots from mikolasan@mastodon.social