
Achieving Human Level Competitive Robot Table Tennis (2408.03906v3)

Published 7 Aug 2024 in cs.RO

Abstract: Achieving human-level speed and performance on real world tasks is a north star for the robotics research community. This work takes a step towards that goal and presents the first learned robot agent that reaches amateur human-level performance in competitive table tennis. Table tennis is a physically demanding sport which requires human players to undergo years of training to achieve an advanced level of proficiency. In this paper, we contribute (1) a hierarchical and modular policy architecture consisting of (i) low level controllers with their detailed skill descriptors which model the agent's capabilities and help to bridge the sim-to-real gap and (ii) a high level controller that chooses the low level skills, (2) techniques for enabling zero-shot sim-to-real including an iterative approach to defining the task distribution that is grounded in the real-world and defines an automatic curriculum, and (3) real time adaptation to unseen opponents. Policy performance was assessed through 29 robot vs. human matches of which the robot won 45% (13/29). All humans were unseen players and their skill level varied from beginner to tournament level. Whilst the robot lost all matches vs. the most advanced players it won 100% matches vs. beginners and 55% matches vs. intermediate players, demonstrating solidly amateur human-level performance. Videos of the matches can be viewed at https://sites.google.com/view/competitive-robot-table-tennis

Citations (6)

Summary

  • The paper demonstrates a hierarchical policy architecture that combines specialized low-level controllers with a strategic high-level controller for competitive play.
  • It employs an iterative sim-to-real transfer approach by gathering extensive real-world data and enhancing simulation fidelity with the MuJoCo physics engine.
  • The results show a 45% overall match win rate, with wins in every match against beginners and 55% of matches against intermediate players, demonstrating solidly amateur human-level performance.

Achieving Human-Level Competitive Robot Table Tennis

The paper "Achieving Human Level Competitive Robot Table Tennis" presents an autonomous robot developed to play competitive table tennis against human opponents. This research effort is focused on reaching human-level proficiency in a high-speed interactive task, leveraging hierarchical and modular policy architectures. This sophisticated fusion of learned low-level and high-level controllers facilitates seamless sim-to-real transitions, making substantial strides towards creating competitive robot athletes.

Summary

The robotic system incorporates an ABB IRB 1100 arm mounted on two linear gantries, enabling motion across a two-dimensional plane. The arm manipulates a table tennis paddle to execute precise, rapid ball returns. Key components of the system are modular low-level controllers (LLCs) and a high-level controller (HLC). Each LLC is dedicated to a specialized skill such as forehand topspin or backhand targeting, while the HLC orchestrates these skills using game statistics, skill descriptors, and opponent modeling.

Hierarchical Policy Architecture

The paper introduces a hierarchical architecture in which multiple LLCs are trained independently, each handling a distinct table tennis skill. These specialist policies are trained in simulation with Blackbox Gradient Sensing (BGS), which promotes smooth actions favorable to sim-to-real transfer. The HLC composes these LLCs, making strategic decisions from observations and game dynamics: choosing between forehand and backhand play, classifying spin types, and applying heuristics and learned preferences for skill selection.
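
As a rough illustration of how such a high-level controller might pick among skills, the sketch below gates candidate LLCs by handedness and ranks them by a capability score combined with a learned preference term. The names (`SkillDescriptor`, `choose_skill`, `h_values`) are hypothetical, not the paper's actual API, and the real system additionally classifies spin and uses richer game statistics.

```python
from dataclasses import dataclass

@dataclass
class SkillDescriptor:
    """Summary statistics describing one low-level controller's capabilities."""
    name: str            # e.g. "forehand_topspin"
    is_forehand: bool    # which side of play the skill covers
    hit_rate: float      # estimated probability of making contact with the ball
    land_rate: float     # estimated probability the return lands on the table

def choose_skill(descriptors, h_values, ball_on_forehand_side):
    """Pick the LLC whose prior capability plus learned preference is highest."""
    best_name, best_score = None, float("-inf")
    for d in descriptors:
        if d.is_forehand != ball_on_forehand_side:
            continue  # heuristic gating: match the skill to the incoming ball's side
        score = d.hit_rate * d.land_rate + h_values.get(d.name, 0.0)
        if score > best_score:
            best_name, best_score = d.name, score
    return best_name
```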

Sim-to-Real Transfer and Iterative Training

An integrated approach to sim-to-real transfer was vital. The researchers built a robust dataset by iteratively gathering real-world data from human-robot play, ensuring the training regime reflected actual game scenarios. They addressed physical dynamics modeling challenges by improving the fidelity of ball, paddle, and air-interaction models in the MuJoCo physics engine. They also iteratively refined real-world-grounded task distributions that act as an automatic curriculum, enriching the training data and narrowing the gap between simulated and real environments.
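
A minimal sketch of that real-world-grounded task distribution idea is shown below (assumed class and method names, not the paper's code): ball states recorded from human-robot play seed the pool from which simulated training episodes are sampled, so the training distribution tracks real conditions as more data is collected.

```python
import random

class BallStateTaskDistribution:
    """Hypothetical sketch: a task distribution grown from recorded real-world ball states."""

    def __init__(self):
        self.real_ball_states = []  # (position, velocity, spin) tuples from real play

    def add_real_world_data(self, recorded_states):
        """After each round of real matches, fold the newly recorded states into the pool."""
        self.real_ball_states.extend(recorded_states)

    def sample_initial_state(self, noise_scale=0.02):
        """Sample a recorded state and perturb it slightly to broaden simulation coverage."""
        position, velocity, spin = random.choice(self.real_ball_states)
        perturb = lambda vec: tuple(x + random.gauss(0.0, noise_scale) for x in vec)
        return perturb(position), perturb(velocity), perturb(spin)
```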

Adaptation and User Study

Real-time adaptation was enabled through continuous updates of H-values, which encode the robot's learned preferences over LLCs during a match. This allowed the robot to refine its strategy based on the opponent's play. The evaluation consisted of a user study with 29 players of varied skill levels, in which the robot won 45% of matches (13/29). The robot won convincingly against beginners, won slightly more than half of its matches against intermediate players, and did not win against advanced players.
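
As a hedged illustration of that online adaptation, the snippet below nudges a per-skill preference value toward observed shot outcomes with a simple exponential moving average; this update rule is an assumption for illustration and may differ from the paper's actual method.

```python
def update_h_value(h_values, skill_name, shot_succeeded, lr=0.1):
    """Move the chosen LLC's preference toward its observed success (illustrative rule)."""
    outcome = 1.0 if shot_succeeded else -1.0
    current = h_values.get(skill_name, 0.0)
    h_values[skill_name] = current + lr * (outcome - current)
    return h_values

# Example: after a missed backhand return, that LLC's preference decreases.
prefs = {"forehand_topspin": 0.3, "backhand_target": 0.3}
prefs = update_h_value(prefs, "backhand_target", shot_succeeded=False)
```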

Results and Implications

Throughout the matches, the robot showed amateur human-level performance:

  • Match Win Rates: The robot won 45% of the matches.
  • Skill Level Impact: The robot won all matches against beginners, 55% against intermediates, and none against advanced players.
  • Adaptation Analysis: The H-values highlighted that while the LLCs tailored for forehand had significant adaptive changes, the backhand improvements were less pronounced.

Future Directions

Several enhancements were proposed, including improving responses to fast and low balls, refining paddle control, and increasing the sophistication of the agent's strategies over multi-shot sequences. More broadly, the result underscores that reaching human-level performance in robotics depends on pairing careful system design with advanced learning architectures.

Conclusion

This paper marks a significant advancement in robotic learning and human-robot interaction, defining protocols and methodologies for future research. It lays the groundwork for the iterative enhancement of robot capabilities through meticulous real-world data collection and strategic modular policy training, ultimately steering towards autonomous systems that can rival human proficiency in dynamic and interactive environments.

The findings extend beyond the table tennis domain, offering insights into deploying learned robotic controllers for complex tasks, advancing hierarchical system designs, and foregrounding the crucial aspects of real-time adaptation in robotics.
