Overview of AutoWorld. A LiDAR-based world model is first trained on unlabeled sequences to learn latent scene dynamics. At simulation time, the trained model predicts future occupancies from the observed LiDAR history, and these predictions condition the motion generation module; samples are drawn from both the world model and the motion generator using a training-free cascaded latent diversity strategy.
Abstract
Multi-agent traffic simulation is central to developing and testing autonomous driving systems. Recent data-driven simulators have achieved promising results, but they rely heavily on supervised learning from labeled trajectories or semantic annotations, which makes scaling their performance costly. Meanwhile, large amounts of unlabeled sensor data can be collected at scale but remain largely unused by existing traffic simulation frameworks. This raises a key question: how can a method harness unlabeled data to improve traffic simulation performance?
In this work, we propose AutoWorld, a traffic simulation framework that employs a world model learned from unlabeled occupancy representations of LiDAR data. Given world-model samples, AutoWorld constructs a coarse-to-fine predictive scene context as input to a multi-agent motion generation model. To promote sample diversity, AutoWorld uses a cascaded Determinantal Point Process framework to guide the sampling processes of both the world model and the motion model. Furthermore, we design a motion-aware latent supervision objective that enhances AutoWorld's representation of scene dynamics.
Experiments on the WOSAC benchmark show that AutoWorld ranks first on the leaderboard according to the primary Realism Meta Metric (RMM). We further show that simulation performance consistently improves with the inclusion of unlabeled LiDAR data, and study the efficacy of each component with ablations. Our method paves the way for scaling traffic simulation realism without additional labeling.
AutoWorld Harnesses Unlabeled Sensor Data
Traffic simulation aims to generate realistic multi-agent motion, reflecting the inherently multimodal distribution of possible future outcomes. Learning such a distribution with supervised methods typically requires large-scale annotated trajectory data. However, collecting high-quality motion labels at scale is time-consuming and expensive, limiting the diversity and coverage of driving scenarios available for training. In contrast, raw sensor data such as LiDAR can be collected at scale without manual annotation.
The main idea behind our proposed semi-supervised framework is to complement supervised motion generation with a self-supervised predictive world modeling stage that leverages unlabeled data to implicitly learn scene dynamics and, by doing so, provide structured future context that improves downstream motion generation. We instantiate this idea in AutoWorld, a traffic simulation framework that formulates behavior generation as a generative process conditioned on a predictive world model of the environment. Specifically, we first learn a motion-aware latent occupancy world model in a fully self-supervised manner from unlabeled LiDAR data, forecasting future latent occupancies. This stage serves as a scalable pretraining phase that implicitly captures motion-driven scene dynamics without requiring semantic or trajectory annotations. From the predicted future latent occupancies, we construct a predictive scene context that summarizes anticipated scene evolution. We then generate agent motions using a conditional diffusion model given both the future latent sequence and the predictive scene context, forming a coarse-to-fine representation of the future scene. Finally, to promote a variety of plausible multi-agent behaviors, we introduce a cascaded latent diversity strategy applied at inference time.
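To give intuition for the diversity mechanism, the sketch below shows greedy MAP selection under a Determinantal Point Process kernel, a standard way to pick a diverse, high-quality subset from a pool of candidate latent samples. This is an illustrative reconstruction, not AutoWorld's actual implementation: the RBF similarity kernel, the `quality` scores, and the median bandwidth heuristic are all assumptions.

```python
import numpy as np

def greedy_dpp_select(features, quality, k):
    """Greedily pick k diverse, high-quality samples via DPP MAP inference.

    Builds L = diag(q) S diag(q), where S is an RBF similarity kernel over
    candidate latent features, then greedily maximizes log det of the
    selected submatrix (illustrative sketch, not the paper's exact kernel).
    """
    # Pairwise squared distances and RBF similarity between candidates
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    S = np.exp(-d2 / (2.0 * np.median(d2) + 1e-8))
    L = quality[:, None] * S * quality[None, :]

    selected, remaining = [], list(range(len(features)))
    for _ in range(k):
        best_i, best_gain = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            # log det rewards high quality and penalizes similarity to
            # already-selected samples
            _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if logdet > best_gain:
                best_i, best_gain = i, logdet
        selected.append(best_i)
        remaining.remove(best_i)
    return selected
```

With two near-duplicate candidates and one distant one, selecting two samples keeps only one of the duplicates, which is the behavior a diversity-promoting sampler needs.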
Results
Results on the WOSAC leaderboard. RMM (Realism Meta Metric) is the primary ranking metric. The "†" denotes technical reports for the Waymo challenge.
| Model | Reference | RMM(↑) | Kinematic(↑) | Interactive(↑) | Map-based(↑) | minADE(↓) |
|---|---|---|---|---|---|---|
| InfGen | ICLR 2026 | 0.7731 | 0.4493 | 0.8084 | 0.9127 | 1.4252 |
| LLM2AD | CoRL 2025 | 0.7779 | 0.4846 | 0.8048 | 0.9109 | 1.2827 |
| SMART-tiny | NeurIPS 2024 | 0.7814 | 0.4854 | 0.8089 | 0.9153 | 1.3931 |
| UniMM† | - | 0.7829 | 0.4914 | 0.8089 | 0.9161 | 1.2949 |
| UniMotion | NeurIPS 2025 | 0.7851 | 0.4943 | 0.8105 | 0.9187 | 1.3036 |
| TrajTok | ICLR 2026 | 0.7852 | 0.4887 | 0.8116 | 0.9207 | 1.3179 |
| CAT-K | CVPR 2025 | 0.7846 | 0.4931 | 0.8106 | 0.9177 | 1.3065 |
| RLFTSim† | - | 0.7857 | 0.4927 | 0.8129 | 0.9183 | 1.3252 |
| SMART-R1 | ICLR 2026 | 0.7858 | 0.4944 | 0.8110 | 0.9201 | 1.2885 |
| DecompGAIL | ICLR 2026 | 0.7864 | 0.4919 | 0.8152 | 0.9176 | 1.4209 |
| AutoWorld (Ours) | - | 0.7865 | 0.4931 | 0.8143 | 0.9185 | 1.3051 |
Ablation studies on WOSAC
Ablation study on the WOSAC 2% validation split. The "†" denotes the model we submitted to the leaderboard. N × M denotes N world-model samples with M motion rollouts each; cascaded diversity is only meaningful when N > 1 and M > 1.

| Model | Samp. Occ. | Samp. Mot. | N × M | RMM(↑) | Kinematic(↑) | Interactive(↑) | Map-based(↑) | minADE(↓) |
|---|---|---|---|---|---|---|---|---|
| Full AutoWorld (cascaded diversity) | Pμ | Pf | 4 × 8 † | 0.7746 | 0.4878 | 0.7968 | 0.9099 | 1.3102 |
| | Pμ | Pf | 8 × 4 | 0.7728 | 0.4889 | 0.7948 | 0.9067 | 1.3097 |
| W/o scene diversity | IID | Pf | 1 × 32 | 0.7664 | 0.4827 | 0.7969 | 0.8894 | 1.3244 |
| | IID | Pf | 4 × 8 | 0.7648 | 0.4846 | 0.7926 | 0.8892 | 1.3362 |
| | IID | Pf | 8 × 4 | 0.7671 | 0.4873 | 0.7943 | 0.8921 | 1.3589 |
| W/o motion diversity | Pμ | IID | 4 × 8 | 0.7563 | 0.4833 | 0.7758 | 0.8871 | 1.4138 |
| | Pμ | IID | 8 × 4 | 0.7539 | 0.4827 | 0.7723 | 0.8853 | 1.4093 |
| | Pμ | IID | 32 × 1 | 0.7492 | 0.4804 | 0.7651 | 0.8824 | 1.3894 |
| W/o diversity | IID | IID | 1 × 32 | 0.7610 | 0.4849 | 0.7855 | 0.8873 | 1.4369 |
| | IID | IID | 4 × 8 | 0.7597 | 0.4840 | 0.7839 | 0.8861 | 1.4256 |
| | IID | IID | 8 × 4 | 0.7618 | 0.4846 | 0.7876 | 0.8870 | 1.4334 |
| | IID | IID | 32 × 1 | 0.7523 | 0.4796 | 0.7728 | 0.8819 | 1.4178 |
| W/o WM | - | Pf | - × 32 | 0.7549 | 0.4745 | 0.7848 | 0.8766 | 1.4279 |
| | - | IID | - × 32 | 0.7472 | 0.4749 | 0.7741 | 0.8682 | 1.4892 |
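The N × M sampling scheme in the table above can be sketched as a two-level cascade: draw N future occupancy latents from the world model, then M motion rollouts per latent. The sampler interfaces (`sample_occupancy`, `sample_motion`) are hypothetical stand-ins, not AutoWorld's real API.

```python
def cascaded_rollouts(sample_occupancy, sample_motion, history,
                      n_scenes=4, m_motions=8):
    """Draw N x M rollouts via a two-level cascade (illustrative sketch).

    sample_occupancy(history) -> one future occupancy latent
    sample_motion(latent, history) -> one multi-agent motion rollout
    """
    rollouts = []
    for _ in range(n_scenes):
        occ_latent = sample_occupancy(history)       # coarse: scene future
        for _ in range(m_motions):
            rollouts.append(sample_motion(occ_latent, history))  # fine: motions
    return rollouts
```

With the leaderboard setting N = 4, M = 8 this produces the 32 rollouts WOSAC evaluates per scenario.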
Motion-Aware Latent Supervision
Effect of motion-aware latent supervision on world modeling and downstream traffic simulation. Focusing the objective on dynamic occupancy transitions improves both world-model quality and traffic simulation performance without requiring semantic labels.
FVD and IoU evaluate the world model; RMM and minADE evaluate downstream traffic simulation.

| Variant | FVD(↓)(×10⁻³) | IoU_static(↑) | IoU_dynamic(↑) | RMM(↑) | minADE(↓) |
|---|---|---|---|---|---|
| Semantic-WM | 23 | 0.89 | 0.73 | 0.7651 | 1.4081 |
| NonSem Uniform-RF | 51 | 0.85 | 0.58 | 0.7547 | 1.4826 |
| NonSem MA-RF | 35 | 0.87 | 0.71 | 0.7618 | 1.4334 |
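The uniform vs. motion-aware reweighting contrast above (Uniform-RF vs. MA-RF) can be illustrated with a simple label-free scheme: voxels whose occupancy changes over time are treated as dynamic and upweighted in the reconstruction loss. This is a hedged sketch of the idea, not the paper's exact objective; the binary change mask, the `alpha` weight, and the BCE form are assumptions.

```python
import numpy as np

def motion_aware_weights(occ_seq, alpha=4.0):
    """Per-voxel loss weights emphasizing dynamic occupancy transitions.

    occ_seq: (T, H, W) binary occupancy grids over time. Voxels whose
    occupancy flips between any consecutive frames get weight 1 + alpha;
    static voxels get weight 1. No semantic labels are needed.
    """
    change = np.abs(np.diff(occ_seq.astype(np.float64), axis=0))  # (T-1, H, W)
    dynamic = (change.max(axis=0) > 0).astype(np.float64)         # (H, W)
    return 1.0 + alpha * dynamic

def weighted_occupancy_bce(logits, target, weights):
    """Binary cross-entropy over occupancy logits, reweighted per voxel."""
    p = 1.0 / (1.0 + np.exp(-logits))
    bce = -(target * np.log(p + 1e-8) + (1.0 - target) * np.log(1.0 - p + 1e-8))
    return float((weights * bce).mean())
```

Under this scheme, reconstruction errors on moving agents cost more than errors on static background, which matches the IoU_dynamic gains reported for MA-RF.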
Scaling with Unlabeled Data
RMM vs. amount of unlabeled data used for world-model training: adding unlabeled LiDAR sequences consistently enhances AutoWorld's simulation realism. Baselines without a world model cannot use unlabeled data. The full AutoWorld setting employs motion diversity, while the IID/IID setting uses IID sampling.
Simulated Scenarios
Qualitative Analysis
Qualitative analysis of multimodal behavior coverage. The ground-truth ego trajectory, candidate SDC paths from WOMD, and 32 rollouts generated by AutoWorld are visualized (left to right). The model produces diverse behaviors within a single scenario, including alternative turning directions and lane changes, with diversity that also reflects different kinematic profiles.
Citation
If you find this project helpful, please cite us: