Thinking of this today:
https://en.m.wikipedia.org/wiki/Indiana_pi_bill
Scientists might wish that science were not political. That wish was never realistic, and it's doubly unrealistic when politicians are trying to make denial of reality into law.
=> More information about this toot | View the thread
From what I've seen, DeepSeek is more efficient through engineering tricks rather than better ML? Doesn't this mean it's not as disruptive as thought? The big players can just use the same tricks now and still keep their advantage from bigger compute?
=> More information about this toot | View the thread
Hahaha, from the EU's summary of the impact of the Apollo project: "The project established the technological pre-eminence of the US over other nations in space sector, so it accomplished the political goals for which it was created."
https://research-and-innovation.ec.europa.eu/document/download/3fb7597a-1680-4dff-86a3-1b82b02ad4b0_en?filename=mission_oriented_r_and_i_policies_case_study_report_apollo_project-us.pdf&prefLang=it
=> More information about this toot | View the thread
Thinking about big science, the human genome project looms large. I'm not an expert so: did this project fulfil its potential? From what I've seen, it seems the major achievements were the development of sequencing tech and a focus on open data. Otherwise, are people still talking in terms of potential value?
Apologies in advance if this comes across as ignorant, I really don't know much about genetics.
=> More information about this toot | View the thread
@GabrielBena
Finally, we made use of the fact that our networks are recurrent (which was a necessary restriction of the simple architecture that we used) and checked how specialisation changed over time. Intriguingly, it decreases over time. But there's more.
This drop in specialisation happens faster the more synapses there are between the modules and the less noise there is. It looks as though specialisation may fall simply as a function of how much net communication bandwidth there is between the modules.
This raises a question: maybe specialisation isn't as simple as we think. Perhaps to some extent it's just a measurement artifact of limited communication bandwidth? Or maybe understanding information flow is key to building systems that can both specialise and generalise?
These are the sorts of questions we're following up on now, hopefully we'll have more to say about that soon, but in the meantime we'd love to discuss some of the issues and questions raised with you all.
What do you think?
=> More information about this toot | View the thread
@GabrielBena
Our intuition suggested that resource constraints are likely to be important: there's little incentive to specialise if you have infinite resources. Sure enough, when we did large parameter sweeps we saw that you get more specialisation when resources (neurons, synapses) are tight.
This seems like an important insight: resource-constrained biological brains are great at generalisation, an expected outcome of having specialised modules with generalisable functions, while machine learning systems are not. Maybe we give them too much computational power? 🤯
=> More information about this toot | View the thread
@GabrielBena
By varying the number of connections between the modules we can fully span the possible range of structural modularity (measured with the widely used graph-theoretic Q metric), train on the task using backprop, then measure how much each module specialises on its own inputs.
Good news! Firstly, all the measures of specialisation qualitatively agree, so we're measuring something real 🤞. When the two modules are fully connected to each other, we don't see any specialisation and when they're maximally modular we do. This is what we'd expect. 😅 All good then? Well...
What surprised us is how much structural modularity you need before you observe specialisation. You need Q>0.4, higher than you observe in the brain. So does this mean that structural and functional modularity are unrelated in practice? Not necessarily, there could be other mechanisms at play.
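For anyone who wants the formula behind that Q axis, here's a minimal sketch of Newman's graph-theoretic modularity (my own illustrative implementation, not the code from the paper):

```python
import numpy as np

def modularity_Q(A, labels):
    # Newman's modularity for an undirected graph:
    # Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)            # node degrees
    two_m = A.sum()              # 2m = total degree
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]  # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```

Two four-node cliques with no edges between them give Q = 0.5, and adding all the between-module connections drives Q down to around zero, which is the range being swept here.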
=> More information about this toot | View the thread
@GabrielBena
Some of these come from Fodor, and later Shallice and Cooper: a module should implement a sub-function, respond to only one type of input, have only limited access to information outside its own state, and impairing it shouldn't impair other modules. Can we quantify these? We came up with three quantified measures of functional modularity based on:
(1) probing (can we infer information a module shouldn't have from its activity),
(2) ablation (which sub-functions are impaired),
(3) dependency on data that should be irrelevant (with correlation).
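To make (1) and (2) concrete, here's a rough sketch of what such measures could look like (illustrative numpy, not the paper's actual estimators):

```python
import numpy as np

def probe_score(activity, secret):
    # Probing sketch: fit a linear read-out from one module's activity
    # to a variable that module shouldn't need, and report R^2.
    # High R^2 means the module carries information it "shouldn't have".
    X = np.column_stack([activity, np.ones(len(activity))])  # add bias
    w, *_ = np.linalg.lstsq(X, secret, rcond=None)
    residual = secret - X @ w
    return 1 - (residual ** 2).sum() / ((secret - secret.mean()) ** 2).sum()

def ablation_drop(eval_fn, params, module_mask):
    # Ablation sketch: zero out one module's parameters and report the
    # resulting drop in task performance for a given sub-function.
    return eval_fn(params) - eval_fn(params * (1 - module_mask))
```

Run `probe_score` on each module against the input it shouldn't see, and `ablation_drop` per module per sub-function, and you get a specialisation profile you can compare across architectures.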
We also designed a task and network intended to have maximal, controllable modularity. There are two modules (dense recurrent neural networks) with sparse interconnections. Each receives a separate input, and solving the task requires that they share precisely one bit of information.
Roughly speaking, the task is that each module is given one digit to observe. If the parity of the two digits is the same (both even or both odd) then return the first digit, otherwise the second digit. You can solve this by having each module only communicate one parity bit to the other.
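In code, the task looks something like this (a sketch with made-up names, just to pin down the rule):

```python
import numpy as np

def make_parity_batch(n, seed=None):
    # Each module observes one digit 0-9. If the two digits have the
    # same parity (both even or both odd), the target is the first
    # digit, otherwise the second.
    rng = np.random.default_rng(seed)
    d1 = rng.integers(0, 10, size=n)
    d2 = rng.integers(0, 10, size=n)
    same_parity = (d1 % 2) == (d2 % 2)
    return d1, d2, np.where(same_parity, d1, d2)
```

A network can solve this with each module sending the other only its own digit's parity, which is what pins the minimal communication requirement at exactly one bit.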
=> More information about this toot | View the thread
@GabrielBena
It could be the case that they're entirely separate: functional modules that don't overlap the structural modules at all. We often look for spatial maps in the brain, but the existence of salt-and-pepper maps shows that the brain doesn't have to organise spatially.
It could be somewhere in between, with functional modules partially overlapping structural modules, which would explain why we can observe partial functional deficits after lesions to some but not all areas.
Or maybe the brain isn't constrained to have anything that we would recognise as functional modularity at all? We won't get too deeply into that possibility, but we did ask what features we would expect to see based on our intuitions about modularity.
=> More information about this toot | View the thread
What's the right way to think about modularity in the brain? This devilish 😈 question is a big part of my research now, and it started with this paper with @GabrielBena finally published after the first preprint in 2021!
https://www.nature.com/articles/s41467-024-55188-9
We know the brain is physically structured into distinct areas ("modules"?). We also know that some of these have specialised function. But is there a necessary connection between these two statements? What is the relationship - if any - between 'structural' and 'functional' modularity?
TLDR if you don't want to read the rest: there is no necessary relationship between the two, although when resources are tight, functional modularity is more likely to arise when there's structural modularity. We also found that functional modularity can change over time! Longer version follows.
#Neuroscience #CompNeuro #ComputationalNeuroscience
=> More information about this toot | View the thread
Wonder if this will be enough for the Royal Society to kick him out, @deevybee?
=> More information about this toot | View the thread
Wow! The much-touted FrontierMath dataset was secretly funded by OpenAI, which had privileged access to it.
https://techcrunch.com/2025/01/19/ai-benchmarking-organization-criticized-for-waiting-to-disclose-funding-from-openai/
=> More information about this toot | View the thread
Crows gathering. #photography
=> More information about this toot | View the thread
Which is worse? Agonising wrist pain or accepting that the phase of my daughter's life where her dad throws her up in the air and catches her is over? 😢 Yeah it's the latter.
=> More information about this toot | View the thread
Oh no it's finally happened. My university has disabled non-Microsoft apps from accessing Exchange, meaning I have to actually use Outlook. Nooooooooo! 😭😭😭
=> More information about this toot | View the thread
😢 David Lynch.
=> More information about this toot | View the thread
Some nice frost. #photography #NaturePhotography
=> More information about this toot | View the thread
This is what it looked like before editing.
=> More information about this toot | View the thread
Took this photo to try to work out a problem with my camera and while fiddling with the editing settings noticed that this picture looks kind of nice. Turned out there was a speck of dust on the sensor btw, fixed now. #photography
=> More information about this toot | View the thread
Anyone got advice on choosing or finding a Pixelfed server? Are they generally paid services because it's more expensive to host lots of images? Not sure of the general culture or expectations.
=> More information about this toot | View the thread
=> This profile with reblog | Go to neuralreckoning@neuromatch.social account