
Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences

Published 31 Dec 2020 in cs.CL and cs.AI | (2012.15738v1)

Abstract: In social settings, much of human behavior is governed by unspoken rules of conduct. For artificial systems to be fully integrated into social environments, adherence to such norms is a central prerequisite. We investigate whether contemporary NLG models can function as behavioral priors for systems deployed in social settings by generating action hypotheses that achieve predefined goals under moral constraints. Moreover, we examine if models can anticipate likely consequences of (im)moral actions, or explain why certain actions are preferable by generating relevant norms. For this purpose, we introduce 'Moral Stories', a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning. Finally, we propose decoding strategies that effectively combine multiple expert models to significantly improve the quality of generated actions, consequences, and norms compared to strong baselines, e.g. through abductive reasoning.

Citations (113)

Summary

  • The paper introduces a novel dataset of branching narratives designed to evaluate goal-directed moral reasoning in social scenarios.
  • It demonstrates that grounding NLG models with rich context improves classification accuracy, as evidenced by experiments with RoBERTa and similar classifiers.
  • The study proposes Chain-of-Experts decoding strategies to enhance constraint satisfaction in ethical generation, despite observed inconsistencies in model outputs.

Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences

The paper "Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences" investigates the integration of NLG models into social environments, focusing on adherence to unspoken norms essential for moral reasoning. Through a novel dataset, the study evaluates whether contemporary models can generate action hypotheses, predict consequences, and derive norms under moral constraints within realistic social scenarios.

Structured Dataset for Goal-Oriented Social Reasoning

The research introduces a comprehensive dataset of structured, branching narratives devised to examine goal-directed moral reasoning. Each narrative is segmented into distinct components: a norm, a situation, an intention, and paired moral and immoral actions together with their consequences. The dataset aims to reflect normative and goal-oriented behavior in a contextualized manner, providing a testbed for evaluating social reasoning in artificial systems.
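The branching structure described above can be sketched as a simple record type. The field names and the example story below are illustrative stand-ins, not the dataset's exact schema or contents:

```python
from dataclasses import dataclass

@dataclass
class MoralStory:
    """One branching narrative; field names are illustrative, not the dataset's exact schema."""
    norm: str                 # unspoken guideline of moral behavior
    situation: str            # social setting of the story
    intention: str            # goal the actor wants to fulfill
    moral_action: str         # satisfies the goal while observing the norm
    moral_consequence: str    # likely outcome of the moral action
    immoral_action: str       # satisfies the goal while violating the norm
    immoral_consequence: str  # likely outcome of the immoral action

# A hypothetical record in this shape (invented for illustration)
story = MoralStory(
    norm="It is kind to help your neighbors.",
    situation="Dana notices her elderly neighbor struggling to carry groceries.",
    intention="Dana wants to get home quickly.",
    moral_action="Dana stops to carry the bags inside before heading home.",
    moral_consequence="The neighbor thanks Dana warmly.",
    immoral_action="Dana hurries past, pretending not to see.",
    immoral_consequence="The neighbor feels ignored and hurt.",
)
```

Each story thus branches at the action step: both actions serve the same intention, but only one observes the norm, and each carries its own consequence.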

Evaluation of NLG Models' Moral Reasoning

Throughout the paper, classifiers such as RoBERTa are employed to assess models' ability to distinguish plausible actions and consequences under different grounding settings. Results indicate that grounding information substantially improves classification accuracy, with classifiers proving adept at recognizing moral actions and plausible outcomes. Notably, the research highlights the classifiers' reliance on rich grounding context for higher performance.

Grounded Generative Models

Generative models like BART, T5, and GPT-2 were fine-tuned on the tasks of generating actions, consequences, and norms. They were assessed through automatic metrics and human evaluation, focusing on coherence and relevance. While generations were generally coherent, models were inconsistent in adhering to moral constraints. The paper proposes Chain-of-Experts (CoE) decoding strategies that leverage strong classifiers to address constraint satisfaction in generated outputs.

Figure 1: Example narrative included in the Moral Stories dataset.

Chain-of-Experts Decoding Strategies

The CoE approach employs a sequence of fine-tuned expert models to improve the relevance of generated actions, consequences, and norms. Through strategies such as ranking and abductive refinement, significant improvements were observed in satisfying normative constraints and generating plausible consequences. The CoE framework demonstrated its utility in anticipating future states for optimal decision-making in social scenarios.
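The ranking strategy can be sketched as generate-then-rerank: a generator proposes several candidates, and expert classifiers score each before the best is kept. The toy scorers below are stand-ins for the fine-tuned classifiers, and the weighting scheme is an assumption, not the paper's exact formulation:

```python
def rank_candidates(candidates, scorers, weights=None):
    """Rank generated candidates by a weighted sum of expert scores.

    In the paper's setting the scorers would be fine-tuned classifiers
    (e.g., norm compliance, consequence plausibility); here they are
    simple stand-in functions.
    """
    weights = weights or [1.0] * len(scorers)
    def combined(cand):
        return sum(w * s(cand) for w, s in zip(weights, scorers))
    return sorted(candidates, key=combined, reverse=True)

# Toy stand-in experts: reward norm-relevant wording, prefer concision
norm_expert = lambda c: 1.0 if "apologize" in c else 0.0
fluency_expert = lambda c: 1.0 / len(c.split())

ranked = rank_candidates(
    ["He storms off without a word.", "He stops to apologize sincerely."],
    [norm_expert, fluency_expert],
)
best = ranked[0]
```

Abductive refinement extends this idea by conditioning regeneration on an anticipated consequence, so the chosen action is the one whose predicted future state best satisfies the constraints.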

Implications and Future Directions

This research underscores the necessity of grounding NLG models in rich contextual information to improve moral reasoning. The study establishes foundational methodologies for integrating ethical reasoning in AI systems deployed in social environments. Future work may focus on advancing normative discovery methods applicable beyond Western norms, and on incorporating moral reasoning frameworks into dialogue and narrative generation systems.

Conclusion

"Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences" presents novel insights into goal-oriented moral reasoning. By leveraging the new dataset for structured social narratives, it offers a rigorous evaluation of NLG models' moral reasoning capabilities and introduces advanced decoding strategies to enhance constraint satisfaction in generative outputs. Future endeavors should emphasize exploring complex moral scenarios and developing comprehensive normative discovery approaches applicable to diverse cultural contexts.
