Combining Ontological Knowledge and Large Language Model for User-Friendly Service Robots
Abstract: Lifestyle support through robotics is an increasingly promising field, with robots expected to take over or assist with chores such as floor cleaning, table setting and clearing, and fetching items. The growth of AI, particularly of foundation models such as large language models (LLMs) and vision-language models (VLMs), is significantly shaping this sector. By enabling natural interaction and supplying broad general knowledge, LLMs are proving invaluable for robotic tasks. This paper focuses on the benefits of LLMs for "bring-me" tasks, in which a robot fetches a specific item for a user, often from a vague instruction. Our previous work used an ontology extended with environmental data to resolve such vagueness, but it reached its limits when unresolvable ambiguities forced the robot to query the user for clarification. Here, we enhance that approach by integrating an LLM as a source of additional commonsense knowledge, pairing it with the ontological data to mitigate hallucinations and reduce the need for user queries, thereby improving usability. We present a system that merges these knowledge bases and evaluate its effectiveness on "bring-me" tasks, aiming for a more seamless and efficient robotic assistance experience.
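The resolution strategy described in the abstract can be sketched as a simple fallback pipeline: consult the ontology first, fall back to LLM commonsense filtered against objects actually present in the environment, and ask the user only when both fail. This is a minimal illustrative sketch, not the paper's implementation; the ontology dictionary, `llm_suggest` stub, and object inventory are all assumed for illustration.

```python
# Toy ontology: maps vague item descriptions to concrete objects known
# to exist in the robot's environment (illustrative assumption).
ONTOLOGY = {
    "something to drink": ["water bottle", "juice box"],
    "something to write with": ["ballpoint pen"],
}

# Inventory of objects the robot has actually observed; used to filter
# LLM suggestions and so mitigate hallucinated items.
KNOWN_OBJECTS = {"water bottle", "juice box", "ballpoint pen", "chocolate bar"}

def llm_suggest(request):
    """Stand-in for an LLM commonsense query (hypothetical stub)."""
    commonsense = {"something sweet": ["candy cane", "chocolate bar"]}
    return commonsense.get(request, [])

def resolve_request(request):
    # 1. Ontology first: answers grounded in the known environment.
    candidates = ONTOLOGY.get(request)
    if candidates:
        return candidates[0], "ontology"
    # 2. LLM commonsense fallback, accepted only if the suggested item
    #    exists in the environment (hallucination mitigation).
    for item in llm_suggest(request):
        if item in KNOWN_OBJECTS:
            return item, "llm"
    # 3. Last resort: ask the user for clarification.
    return None, "ask_user"
```

The ordering encodes the paper's stated goal: ontological grounding takes priority, the LLM fills commonsense gaps, and user queries become the exception rather than the default.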