Multimodal Multi-Robot Coordination

Research Assistant
Agile and Intelligent Robotics Lab
- present
stochastic planner
model predictive control
multi-agent control

As a research assistant at the Agile and Intelligent Robotics (AIRO) Lab, part of the Laboratory for Computational Sensing and Robotics (LCSR), I worked under Dr. Joseph Moore and Mark Gonzales to explore multimodal path planning for navigation and multi-robot coordination. Our paper for the 2026 IEEE International Conference on Robotics & Automation (ICRA) is currently under review.

Background

Sampling-based planners, like Model Predictive Path Integral (MPPI) control and the Cross-Entropy Method (CEM), typically pick the optimal path/trajectory out of a group of sample trajectories generated from a distribution. Then, after the robot moves along that trajectory for a short time, it creates a new distribution around its current trajectory, generates another group of samples, and again picks the optimal one. With this strategy, robots can often find an efficient path from their starting position to their goal in real time.
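As an illustrative sketch (not the lab's actual implementation), the sample-then-refit loop described above might look like the following, where `cost_fn`, `n_samples`, and `n_elites` are hypothetical names chosen here for clarity:

```python
import numpy as np

def cem_plan(cost_fn, mean, std, n_samples=64, n_elites=8, n_iters=5):
    """One planning cycle of a basic cross-entropy method (CEM) planner.

    Samples candidate control sequences from a Gaussian around the
    current mean, keeps the lowest-cost "elite" samples, and refits
    the distribution to them. cost_fn maps a candidate to a scalar cost.
    """
    for _ in range(n_iters):
        # Sample candidates around the current distribution.
        samples = mean + std * np.random.randn(n_samples, *mean.shape)
        costs = np.array([cost_fn(s) for s in samples])
        # Keep the elites (lowest cost) and refit the Gaussian to them.
        elites = samples[np.argsort(costs)[:n_elites]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # execute the first step of this sequence, then replan
```

In a receding-horizon setup the robot would execute only the beginning of the returned sequence, then rerun `cem_plan` with the refit distribution as the new starting point, which is exactly the behavior that makes the planner increasingly concentrated around its current trajectory.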

However, in more challenging environments these planners can get stuck. This is due to their lack of exploration—whenever the robot moves forward one “step”, its trajectory options are limited to those near its current trajectory. In other words, as the robot moves along a trajectory, it becomes increasingly convinced that it is on the right path. So, if a wall suddenly appears over its horizon, the robot may not see any options that lead it around the wall.

To combat this, we developed and implemented a multimodal cross-entropy planning approach that splits the sample trajectories into multiple groups, or “modes”, and fits a Gaussian distribution to each mode to sample from. This lets the planner keep exploring alternative routes while still following the optimal trajectory, leaving the robot options to escape trap environments or avoid sudden obstacles, including other robots.
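A minimal sketch of one multimodal update, assuming each mode simply maintains its own Gaussian and refits independently (how modes are created and merged in our paper is not shown here, and names like `multimodal_cem_step` are illustrative):

```python
import numpy as np

def multimodal_cem_step(cost_fn, modes, n_samples=32, n_elites=6):
    """One iteration of a multimodal CEM update (illustrative sketch).

    `modes` is a list of (mean, std) Gaussians. Each mode samples and
    refits independently, so low-cost alternative routes stay alive
    even while the robot follows the current best mode.
    """
    best_cost, best_mean, new_modes = np.inf, None, []
    for mean, std in modes:
        # Sample and refit this mode's Gaussian on its own elites.
        samples = mean + std * np.random.randn(n_samples, *mean.shape)
        costs = np.array([cost_fn(s) for s in samples])
        elites = samples[np.argsort(costs)[:n_elites]]
        new_mean = elites.mean(axis=0)
        new_modes.append((new_mean, elites.std(axis=0) + 1e-6))
        # Track the globally best mode to execute this step.
        if costs.min() < best_cost:
            best_cost, best_mean = costs.min(), new_mean
    return new_modes, best_mean  # follow best_mean; keep all modes alive
```

The key difference from the single-mode loop is the return value: the robot executes the best mode's trajectory but carries every mode's distribution forward, so a previously second-best route around an obstacle is still available at the next replanning step.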

In addition, we designed and implemented a distributed global optimization algorithm that lets a multi-robot team pick a trajectory for each robot that is best for the team as a whole.
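To make the team-level selection concrete, here is a deliberately simplified, centralized sketch of the underlying optimization problem—choosing one trajectory per robot to minimize a joint cost. The actual algorithm in our paper is distributed; the exhaustive search and the `own_cost`/`pair_cost` names below are assumptions made for illustration:

```python
import itertools
import numpy as np

def select_joint_trajectories(candidates, own_cost, pair_cost):
    """Pick one trajectory per robot minimizing the team's total cost.

    candidates[i] is a list of candidate trajectories for robot i.
    own_cost(traj) scores a single robot's trajectory; pair_cost(a, b)
    penalizes interactions between two robots (e.g., near-collisions).
    Exhaustive search over all combinations -- only tractable for small
    teams and candidate sets, and purely illustrative.
    """
    best, best_cost = None, np.inf
    for combo in itertools.product(*candidates):
        cost = sum(own_cost(t) for t in combo)
        cost += sum(pair_cost(a, b)
                    for a, b in itertools.combinations(combo, 2))
        if cost < best_cost:
            best, best_cost = combo, cost
    return best
```

In practice the candidates would be the modes produced by each robot's multimodal planner, and `pair_cost` could penalize the minimum distance between two sampled trajectories, which is what makes the team pick combinations where robots route around each other.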

Hardware Experimental Setup

In order to conduct our hardware trials with distributed control (as opposed to centralized control), we needed to mount an onboard computer on each car to generate trajectories. We used a Traxxas remote control car with an NVIDIA Jetson AGX Orin for computation, a Pixracer Pro for servo control, and motion capture markers for state estimation.

Traxxas Maxx car with custom 3D printed clamps

To mount this equipment to the car, I designed a secure mounting plate with quick-release clamps and appropriate mounting features for each component.

mounting plate attachment example

After assembly and lots of debugging, testing, and calibration, we were able to run several hardware experiments with promising results! This page will be updated in the near future.

hardware experimental setup

Example Hardware Trials

single mode planner--samples were not split, resulting in the agents crashing into each other

two mode planner--samples were split into two groups, allowing the agents to avoid each other