@GabrielBena
By varying the number of connections between the modules, we can span the full range of structural modularity (measured with the widely used graph-theoretic Q metric), train the network on the task using backprop, and then measure how much each module specialises on its own inputs.
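As a rough illustration (not the authors' actual code), here's how the structural side of that sweep can be quantified: build two internally dense modules, vary the number of cross-module connections, and compute Newman's Q with networkx. Module sizes and connection counts below are made up for the example; only the trend matters.

```python
# Hypothetical sketch: Q modularity of a two-module graph as a function of
# how many connections link the modules (more cross edges -> lower Q).
import itertools
import random

import networkx as nx
from networkx.algorithms.community import modularity


def two_module_graph(n_per_module=16, n_cross=4, seed=0):
    """Two fully connected modules joined by `n_cross` random cross edges."""
    rng = random.Random(seed)
    G = nx.Graph()
    module_a = list(range(n_per_module))
    module_b = list(range(n_per_module, 2 * n_per_module))
    # Dense connectivity inside each module.
    G.add_edges_from(itertools.combinations(module_a, 2))
    G.add_edges_from(itertools.combinations(module_b, 2))
    # A chosen number of connections between the modules.
    cross = rng.sample([(a, b) for a in module_a for b in module_b], n_cross)
    G.add_edges_from(cross)
    return G, [set(module_a), set(module_b)]


# Sweep from maximally modular (no cross edges, Q = 0.5 for two equal modules)
# towards a densely mixed network (Q near 0).
for n_cross in (0, 4, 16, 64, 256):
    G, communities = two_module_graph(n_cross=n_cross)
    print(f"{n_cross:4d} cross edges -> Q = {modularity(G, communities):.3f}")
```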
Good news! Firstly, all the measures of specialisation qualitatively agree, so we're measuring something real 🤞. When the two modules are fully connected to each other, we don't see any specialisation, and when they're maximally modular we do. This is what we'd expect. 😅 All good then? Well...
What surprised us is how much structural modularity you need before you observe specialisation. You need Q>0.4, higher than what's observed in the brain. So does this mean that structural and functional modularity are unrelated in practice? Not necessarily: there could be other mechanisms at play.