Ambroise Odonnat

Ph.D. student at Huawei Noah's Ark Lab and Inria.
Jointly supervised by Ievgen Redko, Romain Tavenard and Laetitia Chapel.

[Photo: in front of TUM in Munich]

I am Ambroise Odonnat, a first-year Ph.D. student at Huawei Noah’s Ark Lab and Inria working on Transformers and distribution shifts.

I leverage various mathematical tools to better understand and empirically improve Transformers in settings where training and test data distributions differ. I am also interested in the optimization of neural networks.

Previously, I obtained my master’s degree from the Mathematics, Vision, and Machine Learning (MVA) program at ENS Paris-Saclay in 2023. I also hold an engineering degree in mathematics and computer science from Ecole des Ponts ParisTech.

I maintain a research blog called logB with my friend Oussama Zekri. Feel free to check it out 🙃. Don’t hesitate to reach out with questions about my research or for possible collaborations!

news

Jan 30, 2025 📑 New preprint on the training dynamics in Transformers: Clustering Heads.
Jan 22, 2025 🥳 DICL was accepted @ICLR 2025.
Dec 18, 2024 🥳 Easing Optimization Paths: A Circuit Perspective was accepted @ICASSP 2025.
Oct 02, 2024 📑 New preprint: Large Language Models as Markov Chains.
Sep 25, 2024 🥳 2 papers @ NeurIPS 2024: one as a spotlight and MaNo as a poster.

selected publications

  1. Easing Optimization Paths: A Circuit Perspective
    Ambroise Odonnat*, Wassim Bouaziz*, and Vivien Cabannes
    ICASSP, 2025.
  2. A Visual Case Study of the Training Dynamics in Neural Networks
    Ambroise Odonnat, Wassim Bouaziz, and Vivien Cabannes
    Preprint, 2024.
  3. Large Language Models as Markov Chains
    Oussama Zekri*, Ambroise Odonnat*, Abdelhakim Benechehab, and 3 more authors
    Preprint, 2024.
  4. MaNo: Exploiting Matrix Norm for Unsupervised Accuracy Estimation Under Distribution Shifts
    Renchunzi Xie*, Ambroise Odonnat*, Vasilii Feofanov*, and 3 more authors
    NeurIPS, 2024.
  5. SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention
    Romain Ilbert*, Ambroise Odonnat*, Vasilii Feofanov, and 4 more authors
    ICML Oral, 2024.
  6. Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias
    Ambroise Odonnat, Vasilii Feofanov, and Ievgen Redko
    AISTATS, 2024.