From Connectomes to Digital Twins: Forecasting the Brain in Real Time
Mapping the Living Mind: From Wiring Diagrams to Neural Forecasting
Scientists have spent years trying to understand the biological brain from two different angles. One camp has focused on connectomics, mapping the brain's physical wiring. The other has relied on functional imaging, watching neurons fire in real time. These two fields are now merging through advanced AI to create what researchers call a digital twin of the brain. This goes beyond taking high-resolution pictures: it is about building models that can actually predict what a brain will do next.
Building the Physical Maps
The foundation of this work is the wiring diagram. We recently saw a massive milestone with the completion of the central brain connectome for the adult fruit fly, Drosophila melanogaster. This map includes more than 125,000 neurons and 50 million synaptic connections. While a fly brain is small, the data is staggeringly complex. A single neuron can connect to hundreds of others, making it very difficult to trace how these pathways produce specific behaviors.
We are seeing similar progress in humans. Researchers recently reconstructed a tiny fragment of the human cerebral cortex at nanoscale resolution. Although the sample was only about one cubic millimeter, mapping it required more than a petabyte of data. These physical maps have revealed things we never knew existed, like neurons that form unusual triangular shapes. However, as many experts have pointed out, a connectome is just a map. It does not tell us how the "traffic" of neural activity moves through those wires.
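At its core, a wiring diagram like the ones above is a large, sparse graph. The toy sketch below shows that idea at illustrative scale with a random adjacency matrix; the size, density, and neuron indices are made up for the example, not taken from any real connectome.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000  # toy neuron count; real fly datasets are ~100x larger

# Entry (i, j) = 1 means neuron i synapses onto neuron j.
# Real connectomes store weighted, very sparse versions of this matrix.
adjacency = (rng.random((n, n)) < 0.005).astype(int)

out_degree = adjacency.sum(axis=1)               # downstream partners per neuron
downstream_of_0 = np.flatnonzero(adjacency[0])   # neuron 0's direct targets
```

Even this trivial structure hints at the "traffic" problem: the matrix tells you which roads exist, but nothing about when signals actually flow along them.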
Predicting the Traffic of the Brain
To solve this, researchers are turning to neural forecasting. One of the most important tools in this area is the Zebrafish Activity Prediction Benchmark, or ZAPBench. It uses light-sheet microscopy to record the activity of over 70,000 neurons in larval zebrafish, currently the only vertebrate whose entire brain can be watched at once at such high resolution.
By using models originally built for weather forecasting, like those in WeatherBench, scientists are testing how well AI can predict the next 30 seconds of a brain’s activity based on just a few seconds of history. This is a massive shift in how we study neuroscience. Instead of just describing what happened, we are trying to forecast what will happen.
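The forecasting setup described above can be sketched with synthetic data: given a few time steps of context, predict a longer horizon of future activity. Everything below (trace shapes, window lengths, the persistence baseline) is illustrative, not the actual ZAPBench protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 100, 400
# Synthetic "activity traces": smoothed noise standing in for calcium signals.
activity = np.cumsum(rng.standard_normal((n_steps, n_neurons)), axis=0) * 0.1

context_len, horizon = 4, 30   # a few steps of history, many steps to forecast
context = activity[:context_len]                       # what the model sees
target = activity[context_len:context_len + horizon]   # what it must predict

# Trivial persistence baseline: repeat the last observed frame.
# Any learned forecaster has to beat this to be worth its compute.
prediction = np.repeat(context[-1:], horizon, axis=0)
mae = np.abs(prediction - target).mean()
```

The asymmetry of the task is the interesting part: the context is far shorter than the horizon, so the model has to learn real dynamics rather than just smoothing.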
Several new techniques are making this possible:
- Volumetric Video Models: Instead of just looking at individual neuron signals, new models like 4D UNets look at the raw 3D video over time. This helps the AI understand the spatial relationships between neurons that other methods might miss.
- Foundation Models: Just like the models that power modern chat tools, new foundation models of the mouse visual cortex are being trained on huge amounts of data. These models can be applied to new animals they have never seen before, successfully predicting how their neurons will react to new videos.
- Classification Strategies: New architectures like QuantFormer are changing the way we think about brain signals. Instead of trying to predict a continuous wave of activity, they treat neural activity like a classification problem. This has proven much more effective at capturing the quick, sparse bursts of activity that define how neurons communicate.
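The classification idea in the last bullet can be sketched in a few lines: quantize a continuous trace into discrete levels, then let a model predict which level comes next instead of regressing a real value. The bin count and the synthetic "bursty" signal below are assumptions for illustration, not QuantFormer's actual tokenizer.

```python
import numpy as np

rng = np.random.default_rng(1)
# Sparse, bursty positive signal standing in for a calcium trace.
trace = rng.exponential(0.3, size=200)

K = 8  # number of discrete activity levels (illustrative choice)
# Quantile-based bin edges so each level covers an equal share of samples.
edges = np.quantile(trace, np.linspace(0, 1, K + 1)[1:-1])
tokens = np.digitize(trace, edges)   # integer class labels in [0, K-1]

# A classifier would be trained against these targets with cross-entropy,
# which weights rare high-activity bins far more than mean-squared error does.
onehot = np.eye(K)[tokens]
```

That loss-weighting difference is why classification can capture sparse spikes that a regression model tends to smooth away.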
Why Global Brain States Matter
One of the biggest hurdles in this research is that a single neuron does not act alone. Its behavior is often influenced by the global state of the brain, such as whether an animal is alert or performing a specific task. A model called POCO, which stands for Population Conditioned forecaster, handles this by looking at local neuron dynamics while also considering the overall state of the entire population. This helps the model understand how shared brain structures influence individual cells.
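The population-conditioning idea can be sketched as feature concatenation: each neuron's prediction sees both its own recent history and a shared summary of the whole population. The mean trace and the random weights below are stand-ins for what POCO actually learns; only the overall shape of the idea is taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, t = 50, 6
history = rng.standard_normal((t, n_neurons))   # recent activity, time x neurons

local = history.T                  # (n_neurons, t): each neuron's own past
pop_state = history.mean(axis=1)   # (t,): shared population summary (toy choice)

# Condition every neuron's forecast on both local dynamics and the global state.
features = np.concatenate([local, np.tile(pop_state, (n_neurons, 1))], axis=1)
W = rng.standard_normal(features.shape[1]) * 0.1   # stand-in for learned weights
next_step = features @ W                            # one forecast per neuron
```

Without the shared `pop_state` columns, each neuron would be forecast in isolation and miss brain-wide shifts like arousal or task engagement.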
Future Applications and Interventions
The goal of this research is not just to understand the brain but to interact with it. If we can forecast neural activity in real time, we can build systems that intervene before something goes wrong. Some models can now run inference in as little as 3.5 milliseconds. This speed could allow for closed-loop optogenetic interventions, where light is used to stimulate neurons to stop a seizure or a specific craving before the person even realizes it is happening.
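The real-time constraint above boils down to a latency budget: inference plus decision must fit inside one imaging frame. A minimal sketch, assuming a ~10 Hz frame rate and a made-up trigger threshold; only the 3.5 ms latency figure comes from the post.

```python
MODEL_LATENCY_MS = 3.5      # inference latency quoted in the post
FRAME_INTERVAL_MS = 100.0   # assumed ~10 Hz imaging frame rate (illustrative)

def should_stimulate(forecast_activity, threshold=0.9):
    """Trigger a light pulse when forecast activity crosses a threshold.

    Both the forecast scale and the threshold are hypothetical; a real
    closed-loop system would calibrate these against recorded seizures.
    """
    return forecast_activity > threshold

# The model only works in closed loop if it answers before the next frame.
budget_ok = MODEL_LATENCY_MS < FRAME_INTERVAL_MS
```

At 3.5 ms against a 100 ms frame, the budget leaves ample headroom for the stimulation hardware itself.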
We are moving into an era where we can see inside ourselves with the same clarity that we see the world around us. While managing petabytes of data is a major challenge, combining physical maps with AI forecasting brings us much closer to a true mechanistic understanding of intelligence.
This post was written with the help of AI for analysis, using the NotebookLM shared resource here: https://notebooklm.google.com/notebook/74dc7f14-54cb-481b-9ee8-8347a6f5cba1
References and Research Links
- A Drosophila computational brain model reveals sensorimotor processing https://doi.org/10.1038/s41586-024-07763-9
- A connectome and analysis of the adult Drosophila central brain https://doi.org/10.7554/eLife.57443
- A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution https://doi.org/10.1126/science.adk4858
- Foundation model of neural activity predicts response to new stimulus types https://doi.org/10.1038/s41586-025-08829-y
- POCO: Scalable Neural Forecasting through Population Conditioning https://arxiv.org/abs/2410.18025
- QuantFormer: Learning to Quantize for Neural Activity Forecasting https://arxiv.org/abs/2405.17140
- ZAPBench: A Benchmark for Whole-Brain Activity Prediction in Zebrafish https://openreview.net/forum?id=oCHsDpyawq
- Forecasting Whole-Brain Neuronal Activity from Volumetric Video https://arxiv.org/abs/2503.00073
- A connectome is not enough – what is still needed to understand the brain of Drosophila? https://doi.org/10.1242/jeb.242740