Robotics Thesis Proposal

  • Remote Access Enabled - Zoom
  • Virtual Presentation
  • Ph.D. Student
  • Robotics Institute
  • Carnegie Mellon University

Safe and Resilient Multi-Robot Systems: Heterogeneity and Human Presence

During a multi-robot mission, a large team of robots behaves as a networked system that relies on communication for smooth information propagation and inter-robot interaction as the mission evolves collectively. Mission success therefore demands safe and reliable robot interactions within the system. When designing and controlling such large-scale systems, we often rely on assumptions of perfect information (e.g., ground-truth state information), unconstrained inter-robot communication, and fault-free operation. However, given the scale of the system and the limited sensing and communication capabilities of real-world robots, these assumptions do not always hold. Uncertainty and adversarial effects can arise rapidly from many sources, e.g., imperfect estimation or prediction from sensors, or a growing number of robot failures. In a networked system, such negative effects can easily cascade and jeopardize the entire mission. This motivates formally provable safe and resilient frameworks that realize the safety and networking assumptions underpinning mission-oriented algorithms in the real world.

On the other hand, identifying the impact of heterogeneity in multi-robot systems is also critical when designing networked safe multi-robot behaviors that must interact with external robot teams. For example, in multi-robot collision avoidance it is often assumed that both parties either use the same cooperative collision avoidance behavior, or that one party acts as a passive, completely non-cooperative obstacle. In reality, however, different robot teams may employ various collision avoidance behaviors, and adaptive safety-assured behaviors that address such heterogeneity would yield less conservative yet still safe behavior. Moving forward, as we envision a future where humans actively engage in multi-robot applications, human presence will introduce additional heterogeneity and adversarial components that need to be considered for safety.

In this thesis work, we seek to develop and validate formal multi-robot frameworks that, under uncertainty and adversaries, assure safe and resilient interactions among heterogeneous robots with human presence while accomplishing mission goals.

In our completed work toward this objective, we have developed three formally provable algorithms for safe and resilient multi-robot coordination: (1) probabilistic safety barrier certificates for collision avoidance under localization and motion uncertainty, (2) global and subgroup connectivity maintenance for multi-robot networking, and (3) robust and resilient multi-robot connectivity maintenance in the presence of robot failures. Within a shared, unified optimization-based multi-robot control framework, we define a family of minimally disruptive control constraints that assure probabilistic safety and resilient networking among homogeneous robots at all times. These constraints characterize the admissible action space of the multi-robot system and can be readily composed with any mission-oriented controller to provide provable safety and networking guarantees. We have demonstrated the effectiveness of the three algorithms on large-scale robot teams in simulation and on realistic simulated platforms.
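To illustrate the flavor of such minimally disruptive control constraints, the sketch below shows a single-robot, static-obstacle case of a barrier-certificate safety filter: the nominal mission control is modified as little as possible so that a barrier condition keeps the robot outside a safety radius. The function name, single-integrator dynamics, and closed-form one-constraint QP solution are illustrative assumptions for this sketch, not the thesis implementation (which handles probabilistic, multi-robot constraints).

```python
import numpy as np

def safety_filter(x, x_obs, u_nom, d_safe=1.0, gamma=1.0):
    """Minimally modify u_nom (single-integrator dynamics x' = u) so that the
    barrier h = ||x - x_obs||^2 - d_safe^2 satisfies dh/dt >= -gamma * h,
    which keeps h >= 0 (robot stays outside the safety radius)."""
    h = np.dot(x - x_obs, x - x_obs) - d_safe**2
    a = 2.0 * (x - x_obs)   # gradient of h, so dh/dt = a . u
    b = -gamma * h          # linear safety constraint: a . u >= b
    if a @ u_nom >= b:
        return u_nom        # nominal control is already safe: do not intervene
    # Closed-form solution of  min ||u - u_nom||^2  s.t.  a . u >= b :
    # project u_nom onto the constraint hyperplane.
    return u_nom + (b - a @ u_nom) / (a @ a) * a
```

With one constraint the QP has this closed-form projection; in the full multi-robot setting, one such linear constraint per robot pair (and per networking requirement) is stacked into a quadratic program solved at every control step.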

In our proposed future work, we plan to 1) extend our probabilistic safety assurance under uncertainty to heterogeneous robots with human presence, and 2) investigate an adversarial multi-robot herding problem, in which invasive human-piloted robot swarms threaten safety-critical regions. In this context, we will employ our safe and resilient coordination controllers to collectively herd adversarial swarms away from the protected zone while preserving the algorithms' safety guarantees. We plan to validate the algorithms in both simulations and real-world experiments on multi-robot systems with human presence.
Thesis Committee:

Katia Sycara (Chair)
Maxim Likhachev
Changliu Liu
Amanda Prorok (University of Cambridge)

