This project aims to understand attentional mechanisms in cognitive tasks, focusing on the processing of music, speech, and rest, by applying advanced computational and inference tools to invasive neuroimaging recordings (stereoelectroencephalography, SEEG) acquired in epileptic patients. We will use simulation-based inference and generative modeling to explore the neural dynamics underlying attention, leveraging tools such as recurrent switching linear dynamical systems (rSLDS), an unsupervised machine learning method that automatically identifies distinct brain states from neural activity data, together with The Virtual Brain (TVB) and vbjax, software tools for simulating brain activity developed at INS. We will explore integrating self-attention from transformer models into neural field models (NFMs) to simulate cognitive processes. We also aim to develop the TVB Agent, an AI assistant that uses large language models (LLMs) to automate brain-simulation workflows in TVB. This interdisciplinary approach seeks to advance both our understanding of brain function and the efficiency of computational neuroscience research.
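To make the rSLDS idea concrete, the sketch below simulates the *generative* side of a recurrent switching linear dynamical system: two discrete "brain states", each with its own linear dynamics, where the switch depends on the current continuous latent state (the "recurrent" part). This is a toy illustration only, not the rSLDS inference procedure or any vbjax/TVB API; all names and the threshold rule are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical latent "brain states", each with its own linear dynamics.
A = {
    0: np.array([[0.95, 0.05], [-0.05, 0.95]]),  # state 0: slow damped rotation
    1: np.array([[0.50, 0.00], [0.00, 0.50]]),   # state 1: fast decay
}

T = 200
z = np.zeros(T, dtype=int)   # discrete state sequence
x = np.zeros((T, 2))         # continuous latent trajectory
x[0] = [1.0, 0.0]

for t in range(1, T):
    # "Recurrent" switching: the discrete state depends on the
    # continuous latent state (here, a simple norm threshold).
    z[t] = 1 if np.linalg.norm(x[t - 1]) < 0.3 else 0
    # Linear dynamics of the active state, plus small Gaussian noise.
    x[t] = A[z[t]] @ x[t - 1] + 0.01 * rng.standard_normal(2)
```

In the actual rSLDS setting, both the discrete state sequence `z` and the per-state dynamics are *inferred* from recorded activity (e.g. SEEG features) rather than specified by hand; this forward simulation only shows the class of generative model being fit.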
- Supervisors: M. Hashemi (Institut de neurosciences des systèmes), M. Woodman (Institut de neurosciences des systèmes)
- Master's student: Sadel Muwahed