Ambroise Odonnat

[Profile photo: In front of TUM in Munich]

I am a second-year Ph.D. student in Paris, in a joint program between Huawei Noah’s Ark Lab and Inria, supervised by Romain Tavenard, Laetitia Chapel, and Ievgen Redko.

I am interested in better understanding transformers through theoretical analysis and large-scale experiments on:

  • Large language models (e.g., here and here)
  • Transformers training and finetuning (e.g., here and here)
  • Out-of-distribution generalization (e.g., here, here and here)
  • Vision transformers and time series forecasting (see here and here)

I was lucky to receive an ICML Oral Award, an ICASSP Oral Award, and a QBIN Best Flash Talk Award for my research in these areas. On a more amusing (and surprising 🙃) note, one of my recent articles was featured in Forbes.

I enjoy working both with a few collaborators and as part of a larger team, contributing to open-source libraries and communicating about my research. I maintain a research blog, logB, and have had the privilege of presenting my research at leading institutions such as EPFL, Mila, Imperial, Cohere, and Kyutai.

I graduated from École des Ponts ParisTech in 2023 and hold a master’s degree in Mathematics, Vision, and Machine Learning (MVA) from ENS Paris-Saclay.

Don’t hesitate to reach out for possible collaborations or questions regarding my research!

news

Apr 27, 2026 🍍 2 papers accepted at ICLR workshops (on LLM tool use and ViT probing)!
Mar 27, 2026 🤗 Very happy to give a talk at Mila on the role of smoothness in ViT finetuning!
Feb 06, 2026 📑 New preprint on the role of smoothness in Vision Transformer finetuning.
Dec 07, 2025 🥳 Very happy to co-organize the NeurIPS BERTs workshop on TSFMs!
Dec 06, 2025 🥳 One paper accepted at NeurIPS workshop on LLM efficient reasoning!

selected publications

  1. Layer by layer, module by module: choose both for optimal OOD probing of ViT
    Ambroise Odonnat, Vasilii Feofanov, Laetitia Chapel, and 2 more authors
    ICLR Workshop CAO, 2026.
  2. Provable Benefits of In-Tool Learning for Large Language Models
    Sam Houlison*, Ambroise Odonnat*, Charles Arnal*, and 1 more author
    ICLR Workshop MemAgents, 2026.
  3. Vision Transformer Finetuning Benefits from Non-Smooth Components
    Ambroise Odonnat, Laetitia Chapel, Romain Tavenard, and 1 more author
    Preprint, 2026.
  4. Optimal Self-Consistency for Efficient Reasoning with Large Language Models
    Austin Feng, Marius Alonso, Ambroise Odonnat, and 2 more authors
    NeurIPS Workshop Efficient Reasoning, 2025.
  5. SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation on Diverse Modalities
    Yanis Lalou*, Théo Gnassounou*, Antoine Collas*, and 6 more authors
    TMLR, 2025.
  6. Leveraging Gradients for Unsupervised Accuracy Estimation under Distribution Shift
    Renchunzi Xie, Ambroise Odonnat, Vasilii Feofanov, and 3 more authors
    TMLR, 2025.
  7. Easing Optimization Paths: A Circuit Perspective
    Ambroise Odonnat*, Wassim Bouaziz*, and Vivien Cabannes
    ICASSP Oral, 2025.
  8. Large Language Models as Markov Chains
    Oussama Zekri*, Ambroise Odonnat*, Abdelhakim Benechehab, and 3 more authors
    Preprint, 2024.
  9. MANO: Exploiting Matrix Norm for Unsupervised Accuracy Estimation Under Distribution Shifts
    Renchunzi Xie*, Ambroise Odonnat*, Vasilii Feofanov*, and 3 more authors
    NeurIPS, 2024.
  10. SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention
    Romain Ilbert*, Ambroise Odonnat*, Vasilii Feofanov, and 4 more authors
    ICML Oral, 2024.
  11. Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias
    Ambroise Odonnat, Vasilii Feofanov, and Ievgen Redko
    AISTATS, 2024.