The tremendous momentum behind autonomous driving calls for novel technologies that support advanced mobility use cases. As car manufacturers keep developing SAE Level 3+ systems to improve passenger safety and comfort, traffic authorities need to establish new procedures to manage the transition from human-driven to fully autonomous vehicles, along with a feedback-loop mechanism to fine-tune the envisioned autonomous systems. A way to automatically profile autonomous vehicles and differentiate them from human-driven ones is therefore essential. In this paper, we present a fully fledged framework that monitors active vehicles using camera images and state information in order to determine whether they are autonomous, without requiring any active notification from the vehicles themselves. Essentially, the framework builds on cooperation among vehicles, which share the data they acquire on the road to feed a machine learning model that identifies autonomous cars. We extensively tested our solution and created the NexusStreet dataset by means of the CARLA simulator, employing both an autonomous driving control agent and a steering wheel maneuvered by licensed drivers. Experiments show that the two behaviors can be discriminated by analyzing video clips with 80% accuracy, which improves to 93% when the target's state information is available. Lastly, we deliberately degraded the state information to observe how the framework performs under non-ideal data collection conditions.
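To make the fusion idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of a binary classifier that separates autonomous from human-driven behavior, first from video-derived features alone and then with state information fused in. Every feature name and dimension here is an illustrative assumption, and the synthetic data merely mimics the reported trend that adding state information improves accuracy.

```python
# Hedged sketch of the video-only vs. video+state classification comparison
# described in the abstract. The real framework's models and features are not
# specified here; all inputs below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-clip video embeddings (e.g., pooled CNN features).
video_feats = rng.normal(size=(n, 32))
# Hypothetical target-state features: speed, steering angle, acceleration.
state_feats = rng.normal(size=(n, 3))
# 0 = human-driven, 1 = autonomous (synthetic labels for this sketch).
labels = rng.integers(0, 2, size=n)
# Inject a weak signal so the toy classifiers have something to learn;
# the state channel carries a stronger cue, mirroring the abstract's result.
video_feats[:, 0] += 0.8 * labels
state_feats[:, 0] += 1.5 * labels

def fit_eval(X, y):
    """Train a logistic-regression classifier and return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

acc_video = fit_eval(video_feats, labels)
acc_fused = fit_eval(np.hstack([video_feats, state_feats]), labels)
print(f"video only: {acc_video:.2f}  video+state: {acc_fused:.2f}")
```

Concatenating the state vector to the video features is the simplest possible fusion choice; the paper's 80% vs. 93% figures suggest the state channel is highly informative whenever it can be collected.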