Invited Talks
Frank (Chih-Yuan) Chiu
Georgia Institute of Technology
Talk Title
Constraint Learning in Multi-Agent Dynamic Games from Demonstrations of Local Nash
Interactions
Abstract
Robots operating in crowded human-populated environments must be capable of inferring the interaction
constraints, such as collision avoidance specifications, that often underlie multi-agent behaviors. To
empower robots to learn interaction constraints, this talk presents an inverse dynamic game-based
framework for inferring parametric constraints from multi-agent interaction demonstrations at local Nash
equilibria. To recover constraints consistent with the local Nash stationarity of the given
demonstrations, we encode the corresponding Karush–Kuhn–Tucker (KKT) conditions within a mixed-integer
linear program (MILP). We establish theoretical guarantees that our method learns inner approximations of
the true safe and unsafe sets. We also use the recovered interaction constraint information to design
motion plans that robustly satisfy the true, a priori unknown constraints despite limited demonstration
data. Across simulations and hardware experiments, our method accurately infers constraints from
interaction demonstrations and leverages the inferred constraint information to design safe interactive
motion plans. We conclude by outlining ongoing and future work on extending the proposed framework to
enable active, parameterization-free, real-time, and online interaction constraint inference. The research
contributions presented in this talk are the result of collaborations with Zhouyu Zhang, Zheng Qiu, and
Dr. Glen Chou.
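To make the KKT-based recovery step concrete, here is a deliberately tiny, self-contained sketch (not the authors' MILP formulation): a single demonstrated decision is assumed locally optimal for a hypothetical 1-D problem with one unknown constraint parameter theta, and a grid search over theta stands in for the mixed-integer solver, accepting exactly those parameters under which the demonstration satisfies the KKT conditions. All numbers and the problem itself are illustrative.

```python
def kkt_consistent(x_demo, theta, tol=1e-6):
    # Demonstration assumed locally optimal for:
    #   min_x  x**2   s.t.  theta - x <= 0
    g = theta - x_demo
    if g > tol:                      # primal feasibility
        return False
    # Stationarity: 2*x - lam = 0  =>  lam = 2*x_demo
    lam = 2.0 * x_demo
    if lam < -tol:                   # dual feasibility
        return False
    if abs(lam * g) > tol:           # complementary slackness
        return False
    return True

x_demo = 1.0  # demonstrated decision, sitting on the (unknown) constraint boundary
candidates = [i / 10 for i in range(0, 21)]  # grid over theta in [0, 2]
feasible = [t for t in candidates if kkt_consistent(x_demo, t)]
```

Because the demonstrated point is active with a strictly positive multiplier, only theta = 1.0 survives the check, mirroring how demonstrations at active constraints pin down constraint parameters; the actual method encodes the same conditions, with binary variables for active/inactive constraints, inside a MILP.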
Relevant Publications
David Fridovich-Keil & Jingqi Li
The University of Texas at Austin
Talk Title
Towards Multi-Agent Strategic Autonomy: A Differentiable Game-Theoretic Perspective
Abstract
As autonomous systems scale to decentralized multi-agent settings, agents must make decisions in the
presence of others with limited and asymmetric information, across both cooperative and non-cooperative
interactions. This raises a fundamental question: how can we model, compute, and learn strategic decisions
in such environments?
This talk approaches this question from a differentiable game-theoretic perspective. By approximating
game-theoretic optimality conditions as differentiable equations, we enable efficient computation and
learning of equilibria under different information structures in complex dynamic games. I will present
scalable algorithms for solving nonlinear feedback dynamic games with convergence and safety guarantees,
along with inverse game-theoretic methods for inferring agents’ objectives and beliefs about others from
partial observations. I will then discuss how these components can be combined in a closed loop, enabling
agents to align with others or exploit information asymmetry when beneficial. We demonstrate these methods
in applications such as advanced air mobility, multi-robot furniture moving, and drone racing, including
hardware experiments, where decentralized agents coordinate in real time without direct communication.
Overall, these results suggest that differentiable game-theoretic structure enables efficient computation
and learning of multi-agent strategies in complex, interactive environments.
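As a minimal illustration of the "differentiable optimality conditions" idea (a sketch with made-up quadratic costs, not the talk's algorithms): for a two-player quadratic game, the first-order Nash stationarity conditions form a differentiable residual, and a simple damped iteration that drives this residual to zero converges to the unique Nash point.

```python
def residual(u1, u2):
    # First-order Nash (stationarity) conditions for the quadratic costs
    #   J1(u1, u2) = (u1 - 1)**2 + u1*u2
    #   J2(u1, u2) = (u2 + 1)**2 + u1*u2
    r1 = 2.0 * (u1 - 1.0) + u2   # dJ1/du1
    r2 = 2.0 * (u2 + 1.0) + u1   # dJ2/du2
    return r1, r2

u1, u2, step = 0.0, 0.0, 0.2
for _ in range(500):
    r1, r2 = residual(u1, u2)
    u1 -= step * r1
    u2 -= step * r2
# iterates converge to the unique Nash point (u1, u2) = (2, -2)
```

In this toy case the residual map is linear with a positive-definite Jacobian, so the fixed-point iteration contracts; the talk's solvers replace this with structured Newton-type updates that scale to nonlinear feedback games.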
Zhaolin Ren & Na (Lina) Li
Mitsubishi Electric Research Laboratories & Harvard University
Talk Title
Scalable Spectral Representations for Multiagent Reinforcement Learning in Network
MDPs
Abstract
Efficient learning in network Markov Decision Processes (MDPs), a fundamental model in multi-agent
reinforcement learning, is extremely challenging due to the exponential growth of the global state-action
space with the number of agents. In this work, utilizing the exponential decay property of network dynamics,
we first derive scalable spectral local representations for multiagent reinforcement learning in network
MDPs, which induce a network linear subspace for the local Q-function of each agent. Building on these local
spectral representations, we design a scalable algorithmic framework for multiagent reinforcement learning
in continuous state-action network MDPs, and provide end-to-end guarantees for the convergence of our
algorithm. Empirically, we validate the effectiveness of our scalable representation-based approach on two
benchmark problems, and demonstrate the advantages of our approach over generic function approximation
approaches to representing the local Q-functions.
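A stylized sketch of what a linear local representation buys (the features below are hand-picked stand-ins, not the spectral features derived in the work): once each agent's local Q-function lies in a low-dimensional linear subspace, fitting it reduces to a small least-squares problem whose size is independent of the global state-action space.

```python
def features(local_state, local_action):
    # Stand-in local features; in the actual framework these come from a
    # spectral representation of the local dynamics.
    return [local_state, local_action]

def fit_linear_q(data):
    # Least-squares fit of Q(s, a) ~ w . phi(s, a) via 2x2 normal equations.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for s, a, q in data:
        f1, f2 = features(s, a)
        a11 += f1 * f1; a12 += f1 * f2; a22 += f2 * f2
        b1 += f1 * q; b2 += f2 * q
    det = a11 * a22 - a12 * a12
    w1 = (a22 * b1 - a12 * b2) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return w1, w2

# Synthetic local Q-values generated from true weights (3, -1):
data = [(s, a, 3.0 * s - 1.0 * a) for s in (0.0, 1.0, 2.0) for a in (0.0, 1.0)]
w = fit_linear_q(data)  # recovers (3.0, -1.0)
```

The exact recovery here is just a consequence of the synthetic data lying in the subspace; the point is that each agent only ever solves a problem of the size of its local representation.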
Relevant Publications
Negar Mehr
University of California, Berkeley
Talk Title
Learning from Interactions: Imitation Learning in Game-Theoretic Multi-Agent Systems
Abstract
To truly transform our lives, autonomous systems must operate in complex environments shared with other
agents. For instance, delivery robots navigate spaces with humans, while warehouse robots must coordinate
on shared factory floors. These settings require systematic methods that enable efficient and reliable
interactions among multiple agents. In this talk, I will focus on imitation learning in interactive
multi-agent domains, with an emphasis on game-theoretic settings. In particular, I will discuss both
behavior cloning and inverse reinforcement learning in the context of strategic interactions among agents.
Unlike single-agent settings, where behavior can be learned from independent demonstrations, multi-agent
environments require reasoning about interdependent decisions, where each agent’s behavior is shaped by
others. I will highlight key challenges that arise in this setting, including the coupling between agents’
actions and the difficulty of inferring consistent objectives and policies from demonstrations of
interactions. I will then discuss approaches that leverage game-theoretic structure to make imitation
learning more tractable and effective in these domains.
Relevant Publications
Chinmay Maheshwari
Johns Hopkins University
Talk Title
Markov Near-Potential Functions: A New Paradigm for Design and Analysis of Multi-Agent
Systems
Abstract
Learning-enabled services are revolutionizing several engineering domains such as robotics, mobility,
energy, and online marketplaces. While significant progress has been made in developing autonomous agents
that operate in isolation, deploying them in dynamic, multi-agent environments presents new theoretical,
algorithmic, and societal challenges. To address these challenges, I will introduce a novel theoretical
framework—Markov near-potential functions (MNPFs)—to study multi-agent interactions in dynamic environments.
I will demonstrate how this framework can be leveraged to design competitive, real-time planning and control
strategies for autonomous multi-car racing. I will also present a new multi-agent reinforcement learning
pipeline, Near-Potential Policy Optimization, which exploits the structure of MNPFs to compute low-regret
approximate Nash strategies in general-sum dynamic games.
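To illustrate the near-potential idea in the simplest possible setting (a one-shot 2x2 game with made-up payoffs, rather than a Markov game): kappa measures the worst-case mismatch between any unilateral payoff change and the corresponding change in a candidate potential function, with kappa = 0 recovering an exact potential game.

```python
# Payoffs u[a1][a2] for a 2x2 two-player game; all numbers are hypothetical.
u1 = [[2.5, 0.0], [0.0, 1.0]]
u2 = [[2.0, 0.0], [0.0, 1.0]]
phi = [[2.0, 0.0], [0.0, 1.0]]   # candidate (near-)potential

def potential_gap(u1, u2, phi):
    # Largest mismatch between a unilateral payoff change and the matching
    # potential change; kappa = 0 means phi is an exact potential.
    kappa = 0.0
    for a2 in range(2):          # player 1 deviates, opponent action fixed
        d_u = u1[1][a2] - u1[0][a2]
        d_phi = phi[1][a2] - phi[0][a2]
        kappa = max(kappa, abs(d_u - d_phi))
    for a1 in range(2):          # player 2 deviates, opponent action fixed
        d_u = u2[a1][1] - u2[a1][0]
        d_phi = phi[a1][1] - phi[a1][0]
        kappa = max(kappa, abs(d_u - d_phi))
    return kappa

kappa = potential_gap(u1, u2, phi)  # 0.5 for the perturbed payoffs above
```

Here the game is an exact potential game except for a 0.5 perturbation of one of player 1's payoffs, so kappa = 0.5; the MNPF framework extends this bound to dynamic (Markov) settings and uses it to control the regret of learned policies.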