A Practical Large-Scale Roadside Multi-View Multi-Sensor Spatial
Synchronization Framework for Intelligent Transportation Systems
Abstract
Spatial synchronization in roadside scenarios is essential for
integrating data from multiple sensors at different locations. Current
methods using cascading spatial transformation (CST) often lead to
cumulative errors in large-scale deployments. Manual camera calibration
demands extensive labor and does not scale, while existing automated
methods are limited to controlled or single-view scenarios. To address these
challenges, our research introduces a parallel spatial transformation
(PST)-based framework for large-scale, multi-view, multi-sensor
scenarios. PST registers each sensor's coordinate system to the global
frame in parallel rather than through a cascade, reducing cumulative
errors. We incorporate deep learning for precise roadside monocular
global localization, cutting the manual work required.
Additionally, we use geolocation cues and an optimization algorithm for
improved synchronization accuracy. Our framework has been tested in
real-world scenarios, outperforming CST-based methods. It significantly
enhances large-scale roadside multi-view, multi-sensor spatial
synchronization while reducing deployment costs.
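As a minimal sketch of the contrast between CST and PST (the notation
here is ours, not drawn from the paper): let T_i^j denote the rigid
transform taking sensor i's coordinate frame to frame j, let G be the
global frame, and let e_i be the error of one pairwise estimate. CST
reaches the global frame by chaining pairwise transforms,

    T_n^G = T_1^G T_2^1 T_3^2 ... T_n^{n-1},    so roughly    e_CST(n) ≈ e_1 + e_2 + ... + e_n,

whereas PST estimates each T_i^G directly against the global frame, so
e_PST(i) ≈ e_i and a poor registration of one sensor does not
contaminate the rest of the deployment.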