Self-Improving and Self-Adapting Agents

🎯 Overview: We study test-time scaling (TTS) methods for self-improving and self-adapting agents, advancing a new paradigm of artificial intelligence in which autonomous systems do not merely act, but evolve reliably through learning from experience, refining behavior at test time, and autonomously modifying their own learning mechanisms.

🧠 Abstract: Although modern foundation models (FMs) like ChatGPT are extraordinarily capable, they remain largely fragile: small variations in input (i.e., "prompts"), such as subtle phrasing changes or unintended noise, can lead to contradictory or ungrounded reasoning. This limits their broader deployment in high-stakes domains like healthcare, law, and science, where precision, reliability, and interpretability are essential.

In this project, we aim to build the next generation of FM systems that can continually assess and correct their behavior at test time, making them more trustworthy and capable. Our studies span three main aspects.

Our findings so far lay a foundation for building FM agents that reliably improve through interaction, feedback, and structure-aware test-time methods, without relying on additional gradient-based training.

🔥 News: We are expanding our studies in multiple directions. Please review our work above. If you have a strong interest in the topics listed and a solid background in mathematics and programming (research experience is a plus), come join us!

Xuan Long Do
A*STAR Doctoral Student (Aug ‘23)
Co-Supervised by Kenji Kawaguchi


Hai N. Nguyen
Research Intern (Jan ‘25)

My name is Hai, and I am currently an AI Research Resident at VinAI. I graduated from Vietnam National University, University of Science (Vietnam). I am interested in Optimization, Optimal Transport, and Large Language Models.

Duy C. Dinh
Research Intern (Jan ‘25)

My name is Duy. I am currently working as an AI Engineer at Creative Force and graduated from Hanoi University of Science and Technology (HUST). With a strong foundation in machine learning research and a growing passion for Generative AI, I seek opportunities to contribute to meaningful and impactful research.

Trong Xuan Do
Research Intern (Jan ‘25)

Trong recently graduated from Hanoi University of Science and Technology (HUST). His research focuses on deep learning and on improving the mathematical reasoning capabilities of large language models (LLMs).

Yiwen Wang
Research Intern (June ‘25)


Duc Anh Nguyen
Research Intern (Jul ‘25)

My name is Duc Anh, and I am from Vietnam. I am currently working at Qualcomm AI Research as an AI Resident. My research focuses on the intersection of the theory and application of Large Language Models, with the goal of improving the efficiency, scalability, and robustness of state-of-the-art models.

Min-Yen Kan
Associate Professor

WING lead; interests include Digital Libraries, Information Retrieval and Natural Language Processing.