Selected Literature

Mateo Vélez-Fort,1,3 Lee Cossell,1,3 Laura Porta,1 Claudia Clopath,1,2 and Troy W. Margrie1,4,*

1 Sainsbury Wellcome Centre, University College London, London, UK

2 Bioengineering Department, Imperial College London, London, UK

3 These authors contributed equally

4 Lead contact

*Correspondence: (email address protected)

https://doi.org/10.1016/j.cell.2025.01.032

SUMMARY

Knowing whether we are moving or whether something in the world is moving around us is possibly the most critical sensory discrimination we need to perform. How the brain, and in particular the visual system, solves this motion-source separation problem is not known. Here, we find that motor, vestibular, and visual motion signals are used by the mouse primary visual cortex (VISp) to differentially represent the same visual flow information according to whether the head is stationary or experiencing passive versus active translation. During locomotion, we find that running suppresses running-congruent translation input and that translation signals dominate VISp activity when running and translation speed become incongruent. This cross-modal interaction between the motor and vestibular systems was found throughout the cortex, indicating that running and translation signals provide a brain-wide egocentric reference frame for computing the internally generated and actual speed of self when moving through and sensing the external world.

Ugne Klibaite,1,4,* Tianqing Li,2,4 Diego Aldarondo,1,3 Jumana F. Akoad,1 Bence P. Ölveczky,1,* and Timothy W. Dunn2,5,*

1  Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA

2  Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA

3  Present address: Fauna Robotics, New York, NY 10003, USA

4  These authors contributed equally

5  Lead contact

*Correspondence: U.K., B.P.Ö., and T.W.D. (email addresses protected)

https://doi.org/10.1016/j.cell.2025.01.044

SUMMARY

Social interaction is integral to animal behavior. However, the lack of tools to describe it in quantitative and rigorous ways has limited our understanding of its structure, underlying principles, and the neuropsychiatric disorders, like autism, that perturb it. Here, we present a technique for high-resolution 3D tracking of postural dynamics and social touch in freely interacting animals, solving the challenging subject occlusion and part-assignment problems using 3D geometric reasoning, graph neural networks, and semi-supervised learning. We collected over 110 million 3D pose samples in interacting rats and mice, including seven monogenic autism rat lines. Using a multi-scale embedding approach, we identified a rich landscape of stereotyped actions, interactions, synchrony, and body contacts. This high-resolution phenotyping revealed a spectrum of changes in autism models and in response to amphetamine not resolved by conventional measurements. Our framework and large library of interactions will facilitate studies of social behaviors and their neurobiological underpinnings.
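To make the "multi-scale embedding" idea concrete, the sketch below shows one minimal way such a pipeline could be assembled: egocentrically aligned 3D poses, posture features smoothed at several timescales, a low-dimensional embedding, and clustering into stereotyped actions. This is an illustrative assumption-laden toy, not the authors' pipeline; the synthetic data, parameter choices, and variable names are all invented for the example.

```python
# Hypothetical sketch of a multi-scale behavioral embedding from 3D poses.
# Synthetic data stands in for real keypoint recordings; nothing here is the
# published method, only a generic illustration of the named ingredients.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Fake recording: 10,000 frames x 20 keypoints x 3 coordinates.
n_frames, n_keypoints = 10_000, 20
poses = np.cumsum(rng.normal(size=(n_frames, n_keypoints, 3)), axis=0)

# Egocentric alignment: subtract a body-center keypoint so the features
# describe posture rather than location in the arena.
poses_centered = poses - poses[:, :1, :]
features = poses_centered.reshape(n_frames, -1)

# Multi-scale temporal features: smooth the posture time series at several
# timescales and stack them, so both fast and slow dynamics are represented.
scales = [1, 5, 25]  # in frames; illustrative choices
multi_scale = np.hstack([
    gaussian_filter1d(features, sigma=s, axis=0) for s in scales
])

# Low-dimensional embedding followed by clustering into stereotyped actions.
embedding = PCA(n_components=10).fit_transform(multi_scale)
labels = KMeans(n_clusters=30, n_init=10, random_state=0).fit_predict(embedding)

# Per-cluster usage gives a coarse behavioral fingerprint for the recording,
# the kind of readout one could compare across genotypes or drug conditions.
usage = np.bincount(labels, minlength=30) / n_frames
print(usage.round(3))
```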

Jack Stanley,1,2,6 Emmett Rabot,3,4,6 Siva Reddy,1 Eugene Belilovsky,1,5 Laurent Mottron,3,4,7 and Danilo Bzdok1,2,7,8,*

1 Mila - Québec Artificial Intelligence Institute, Montréal, QC H2S 3H1, Canada

2 The Neuro - Montréal Neurological Institute (MNI), McConnell Brain Imaging Centre, Department of Biomedical Engineering, Faculty of Medicine, School of Computer Science, McGill University, Montréal, QC H3A 2B4, Canada

3 Research Center, Centre Intégré Universitaire de Santé et de Services Sociaux du Nord-de-l’Île-de-Montréal (CIUSSS-NIM), Montréal, QC H4K 1B3, Canada

4 Université de Montréal, Montréal, QC H3C 3J7, Canada

5 Department of Computer Science and Software Engineering, Concordia University, Montreal, QC H3G 1M8, Canada

6 These authors contributed equally

7 These authors contributed equally

8 Lead contact

*Correspondence: (email address protected)

https://doi.org/10.1016/j.cell.2025.02.025

SUMMARY

Efforts to use genome-wide assays or brain scans to diagnose autism have seen diminishing returns. Yet the clinical intuition of healthcare professionals, based on longstanding first-hand experience, remains the gold standard for diagnosis of autism. We leveraged deep learning to deconstruct and interrogate the logic of expert clinician intuition from clinical reports to inform our understanding of autism. After pre-training on hundreds of millions of general sentences, we fine-tuned large language models (LLMs) on >4,000 free-form health records from healthcare professionals to distinguish confirmed versus suspected autism cases. By introducing an explainability strategy, our extended language model architecture could pin down the single sentences most salient in driving clinical thinking toward correct diagnoses. Our framework flagged the most autism-critical DSM-5 criteria to be stereotyped repetitive behaviors, special interests, and perception-based behaviors, which challenges today's focus on deficits in social interplay, suggesting necessary revision of long-trusted diagnostic criteria in gold-standard instruments.
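The explainability strategy described above, scoring which single sentences drive a report-level prediction, can be illustrated with a simple occlusion scheme: remove one sentence at a time and measure how much the predicted probability drops. The sketch below uses a toy bag-of-words classifier as a stand-in for the fine-tuned LLM; all report text, labels, and function names are invented for the example and are not from the study.

```python
# Hypothetical sketch: sentence-level saliency by occlusion. A toy TF-IDF
# logistic-regression classifier substitutes for the fine-tuned LLM described
# in the abstract; data and names are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus of free-form "reports" labeled confirmed (1) vs. suspected (0).
reports = [
    "Repetitive hand movements observed. Strong special interest in train schedules.",
    "Parents report social withdrawal. No repetitive behaviors noted during exam.",
    "Child lines up toys for hours. Unusual sensory response to loud sounds.",
    "Some social hesitation reported by teacher. Development otherwise typical.",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(reports), labels)

def predict_proba(text: str) -> float:
    """Probability of the 'confirmed' class for a whole report."""
    return clf.predict_proba(vectorizer.transform([text]))[0, 1]

def sentence_saliency(report: str) -> list[tuple[str, float]]:
    """Score each sentence by the drop in predicted probability when removed."""
    sentences = [s.strip() for s in report.split(".") if s.strip()]
    full_score = predict_proba(report)
    scores = []
    for i, sent in enumerate(sentences):
        ablated = ". ".join(s for j, s in enumerate(sentences) if j != i)
        scores.append((sent, full_score - predict_proba(ablated)))
    return sorted(scores, key=lambda pair: -pair[1])

new_report = ("Stereotyped rocking noted throughout the visit. "
              "Intense special interest in maps. Eye contact is intermittent.")
for sentence, score in sentence_saliency(new_report):
    print(f"{score:+.3f}  {sentence}")
```

The same occlusion logic applies unchanged when the stand-in classifier is replaced by any model exposing a report-level probability, which is what makes sentence-level attribution attractive for auditing clinical-text classifiers.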
