Reasoning and Planning for Large Language Models

Abstract

This workshop explores the growing capabilities of large language models (LLMs), such as OpenAI's o1 model, in reasoning, planning, and decision-making, highlighting recent advances and challenges. We aim to examine how reinforcement learning methods, post-training optimization, and efficient inference techniques can further enhance LLMs' reasoning capabilities. Topics include training approaches for enhancing reasoning and planning abilities, scaling inference for complex tasks, developing robust benchmarks, and extending LLMs to multi-modal and embodied environments. We will also discuss broader themes such as causal reasoning, collaborative multi-agent systems, uncertainty, and explainability, offering insights and guidance for the further development of reasoning and planning in LLMs.

Date
28 April 2025
Location
Singapore Expo
1 Expo Drive, Singapore, Singapore 486150
Jiaying Wu
Research Fellow (Jul ‘24)

Postdoctoral Research Fellow at WING & NUS CTIC

Min-Yen Kan
Associate Professor

WING lead; research interests include Digital Libraries, Information Retrieval, and Natural Language Processing.