Talks

Quality Assurance for Autonomous Driving Systems: A Software Engineering Perspective

June 02, 2024

Tutorial Talk, Autoware Tutorial, 2024 IEEE Intelligent Vehicles Symposium (IV 2024), Jeju Island, Korea

Quality assurance for Autonomous Driving Systems (ADS) has long been recognized as a notoriously challenging yet crucial task, requiring substantial domain-specific knowledge and engineering effort to bridge the last gap between state-of-the-art ADS methodologies and practical applications where safety, reliability, and security are paramount. In this tutorial, we provide a high-level overview of our work on advancing the quality assurance of ADS. The tutorial introduces solutions and frameworks that tackle the quality challenges of ADS from two aspects: 1) a complete quality analysis pipeline for AI components in ADS, from the unit level to the system level, and 2) a series of quality assurance frameworks for AI-enabled Cyber-Physical Systems (CPS), specialized for ADS. In particular, the first part presents our work on quality analysis of ADS, including robustness benchmarking of AI-enabled sensor fusion systems, testing of simulation-based ADS, risk assessment based on data distribution and uncertainty, and repair methods for AI components. The second part summarizes our work on trustworthy ADS from the CPS perspective, including a CPS benchmark, an ensemble method for AI-controller fusion, AI-aware testing methods, and LLM-enabled approaches for the planning and design of AI components. The third part introduces recent advances in applying LLMs to autonomous driving, including LLM-centric decision-making that uses language as an interface, and opportunities for applying LLMs to cross-modal test generation.
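As a loose illustration of one ingredient of the quality analysis pipeline above, risk assessment from predictive uncertainty, the sketch below flags inputs whose Monte-Carlo-dropout uncertainty is high. The model interface, function names, and threshold are assumptions made for illustration; they are not the tutorial's actual tooling.

```python
import numpy as np

def mc_dropout_uncertainty(model, x, n_samples=20):
    """Estimate predictive uncertainty by repeatedly sampling a model
    whose dropout layers stay active at inference time.

    `model(x, training=True)` returning a NumPy array is an assumed
    interface for this sketch.
    """
    preds = np.stack([model(x, training=True) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

def flag_risky_inputs(model, inputs, threshold=0.15):
    """Flag inputs with high predictive uncertainty, a simple proxy for
    distribution shift or risky operating conditions. The threshold is
    illustrative and would be calibrated on held-out data in practice."""
    risky = []
    for x in inputs:
        _, std = mc_dropout_uncertainty(model, x)
        if float(std.mean()) > threshold:
            risky.append(x)
    return risky
```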

ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning

May 14, 2024

Conference Presentation, 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan

Motivated by the substantial achievements of Large Language Models (LLMs) in natural language processing, recent research has begun to investigate the application of LLMs to complex, long-horizon sequential task planning problems in robotics. LLMs are advantageous in that they offer the potential for greater generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM achieves markedly higher success rates in task accomplishment than state-of-the-art LLM-based planners, while preserving the broad applicability and generalizability of working with natural language instructions.
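A minimal sketch of the three-step loop described above, assuming caller-supplied callables for the LLM translator, the LLM planner, and a plan validator (e.g., a wrapper around a PDDL validator such as VAL). These interfaces are illustrative stand-ins, not the paper's implementation:

```python
from typing import Callable, Optional, Tuple

def isr_llm_plan(
    instruction: str,
    domain_pddl: str,
    translate: Callable[[str, str], str],                    # NL -> PDDL problem
    plan_fn: Callable[..., str],                             # LLM planner
    validate: Callable[[str, str, str], Tuple[bool, str]],   # (valid?, feedback)
    max_iters: int = 5,
) -> Optional[str]:
    """Preprocess, plan, then iteratively validate and refine (ISR-LLM style)."""
    # Preprocessing: LLM translator converts the instruction into PDDL.
    problem_pddl = translate(instruction, domain_pddl)

    # Planning: LLM planner drafts an initial plan.
    plan = plan_fn(domain_pddl, problem_pddl)

    # Iterative self-refinement: validator feedback is fed back to the
    # planner until the plan is valid or the iteration budget runs out.
    for _ in range(max_iters):
        valid, feedback = validate(domain_pddl, problem_pddl, plan)
        if valid:
            return plan
        plan = plan_fn(domain_pddl, problem_pddl, feedback=feedback)
    return None  # no valid plan found within the budget
```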

Safe Reinforcement Learning with Model Order Reduction Techniques

September 21, 2022

Invited Talk, The 1st International Workshop on Safe Reinforcement Learning Theory and its Applications, 2022 IEEE International Conference on Multi-sensor Fusion and Integration (MFI 2022), Cranfield, United Kingdom

Although state-of-the-art learning approaches exhibit impressive results for dynamical systems, only a few applications to real physical systems have been presented. One major impediment is that intermediate policies encountered during training may produce behaviors that are harmful not only to the system itself but also to its environment. In essence, imposing safety guarantees on learning algorithms is vital for autonomous systems acting in the real world. In this talk, we discuss a computationally efficient safe reinforcement learning (SRL) framework for complex dynamical systems based on model order reduction (MOR) techniques. With a proper definition of the safe region, we give a supervisory control strategy that switches the actions applied to the system between the learning-based controller and a predefined corrective controller. A simplified system, obtained via either physically inspired or data-driven MOR, defines a low-dimensional safe region that approximates the high-dimensional safe region of the original dynamical system. To preserve the performance of the learning-based controller, the belief about the safe region is updated online using the observed behavior of the actual system. The proposed SRL framework leads to a safer learning process and offers a possible solution to the challenging problem of safely applying learning algorithms in real-world scenarios.
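A minimal sketch of the supervisory switching strategy and the online belief update described above. The reduced-order projection, the ball-shaped safe-region belief, and the update rates are illustrative assumptions, not the talk's exact formulation:

```python
import numpy as np

def supervised_action(x, rl_controller, corrective_controller,
                      reduce, safe_set_contains):
    """Switch between the learning-based and the corrective controller.

    x                 : full (high-dimensional) system state
    reduce(x)         : projection onto the reduced-order model's state
    safe_set_contains : membership test for the low-dimensional safe region
    """
    z = reduce(x)  # low-dimensional state from model order reduction
    if safe_set_contains(z):
        return rl_controller(x)       # safe: let the learning policy act
    return corrective_controller(x)   # unsafe: fall back to the safe controller

def update_safe_belief(safe_radius, z_observed, was_safe,
                       grow=1.05, shrink=0.8):
    """Toy online update of a ball-shaped safe-region belief: expand it when
    the observed behavior stayed safe outside the current estimate, shrink
    it after an unsafe observation."""
    r = float(np.linalg.norm(z_observed))
    if was_safe and r > safe_radius:
        return min(safe_radius * grow, r)
    if not was_safe:
        return safe_radius * shrink
    return safe_radius
```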