Large Language Models as Corporate Lobbyists

Published 3 Jan 2023 in cs.CL and cs.CY | arXiv:2301.01181v7

Abstract: We demonstrate a proof-of-concept of an LLM conducting corporate lobbying related activities. An autoregressive LLM (OpenAI's text-davinci-003) determines if proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of novel ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model. It outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was the state-of-the-art model on many academic natural language tasks until text-davinci-003 was recently released. The performance of text-davinci-002 is worse than the simple baseline. Longer-term, if AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. Initially, AI is being used to simply augment human lobbyists for a small portion of their daily tasks. However, firms have an incentive to use less and less human oversight over automated assessments of policy ideas and the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.

Citations (18)

Summary

  • The paper demonstrates that text-davinci-003 achieves a 75.1% relevance accuracy when matching legislation to corporate profiles.
  • It leverages hundreds of labeled data points, outperforming the earlier text-davinci-002 model for lobbying tasks.
  • The study highlights both the potential for automating lobbying processes and the risks of AI-driven influence on democratic governance.

LLMs as Corporate Lobbyists: A Critical Assessment

The paper "Large Language Models as Corporate Lobbyists" by John J. Nay presents a proof-of-concept study on using LLMs, specifically OpenAI's text-davinci-003, to automate aspects of corporate lobbying. The research explores whether LLMs can assess the relevance of proposed U.S. Congressional bills to specific companies, provide justifications and confidence levels, and draft persuasive letters to legislators. Through comparison with the previous iteration, text-davinci-002, the study highlights the continuing improvement in model performance on lobbying-related tasks.
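
The relevance query can be sketched as a prompt assembled from a company's self-description and a bill's text. The layout, field names, and example inputs below are illustrative assumptions, not the paper's verbatim prompt template:

```python
def build_relevance_prompt(company_name: str, company_description: str,
                           bill_title: str, bill_summary: str) -> str:
    """Assemble an illustrative bill-relevance prompt.

    This is a guess at the general shape of the paper's approach, not
    its verbatim template: the model is asked for a yes/no relevance
    call, an explanation, and a confidence level.
    """
    return (
        f"You are a lobbyist assessing Congressional bills for relevance "
        f"to {company_name}.\n\n"
        f"Company description: {company_description}\n\n"
        f"Bill title: {bill_title}\n"
        f"Bill summary: {bill_summary}\n\n"
        "Is this bill relevant to the company? Answer YES or NO, then give "
        "a one-paragraph explanation and a confidence level from 0 to 100."
    )

# Hypothetical example inputs.
prompt = build_relevance_prompt(
    "ExampleBio Inc.",
    "A biopharmaceutical company developing treatments for CNS diseases.",
    "Mental Health Services for Students Act",
    "Provides grants for school-based mental health service programs.",
)
```

A completion model such as text-davinci-003 would then be called with this prompt, and the YES/NO answer and confidence parsed from its response.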

Methodology and Results

To evaluate the potential of LLMs in lobbying, the study deployed text-davinci-003 to identify congressional bills relevant to specific companies by analyzing both a company's self-description and the text of the proposed legislation. The model was benchmarked against hundreds of novel ground-truth relevance labels, achieving an accuracy of 75.1%, better than the 70.9% baseline of always predicting irrelevance. The older model, text-davinci-002, was far less effective, with an accuracy of 52.2% (below the baseline), underscoring text-davinci-003's stronger language understanding.
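
The headline comparison reduces to simple accuracy arithmetic over binary labels. A minimal sketch, with fabricated toy labels (the ground-truth set itself is not reproduced here), only mirroring the paper's roughly 71/29 class skew:

```python
from collections import Counter

def accuracy(preds, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def majority_baseline(labels):
    """Accuracy obtained by always predicting the most common label."""
    _, count = Counter(labels).most_common(1)[0]
    return count / len(labels)

# Toy labels with ~71% "irrelevant" (False), echoing the paper's class
# skew; the predictions are fabricated to land near 75% accuracy.
labels = [False] * 71 + [True] * 29
preds = [False] * 60 + [True] * 11 + [True] * 15 + [False] * 14

model_acc = accuracy(preds, labels)       # 0.75 on this toy data
baseline_acc = majority_baseline(labels)  # 0.71
```

The paper's point is exactly this comparison: text-davinci-003 clears the majority-class baseline, while text-davinci-002 falls below it.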

Accuracy improved to 79% when only predictions assigned high confidence scores were considered. Additionally, the paper illustrated the use of LLMs to generate draft letters to legislators; the model produced coherent arguments, though further prompt engineering was required for optimal results.
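
This kind of selective prediction (keeping only answers above a confidence threshold) can be sketched as follows; the scores and the 0.8 threshold are illustrative, since the paper does not publish per-example confidences:

```python
def filter_by_confidence(preds, labels, confidences, threshold):
    """Score only the (pred, label) pairs whose confidence meets the
    threshold; returns (accuracy_on_kept, number_kept)."""
    kept = [(p, l) for p, l, c in zip(preds, labels, confidences)
            if c >= threshold]
    if not kept:
        return None, 0
    correct = sum(p == l for p, l in kept)
    return correct / len(kept), len(kept)

# Toy data: the higher-confidence predictions happen to be the correct ones.
preds       = [True, False, True,  False, True]
labels      = [True, False, False, False, True]
confidences = [0.95, 0.90,  0.55,  0.85,  0.92]

acc_all, n_all = filter_by_confidence(preds, labels, confidences, 0.0)
acc_hi, n_hi = filter_by_confidence(preds, labels, confidences, 0.8)
```

On this toy data, accuracy rises from 0.8 over all five predictions to 1.0 over the four high-confidence ones, mirroring (in miniature) the paper's 75.1% to 79% improvement.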

Implications for AI and Law

The implications of this research touch on both practical and theoretical aspects of AI deployment. Practically, automating routine lobbying tasks could reduce operational costs and make lobbying more accessible to less-financed entities, potentially democratizing influence in legislative processes. However, the deployment of LLMs in influencing law poses notable risks, particularly regarding the autonomy of AI systems. The capability of LLMs to subtly steer policy discourse away from direct human intentions could undermine the integrity of democratic legislative processes.

From a theoretical viewpoint, the alignment of AI with human societal values becomes more complex when AI is involved in the law-making process itself. Legislation serves as a codified expression of social values and preferences; thus, AI's role should ideally be limited to supporting human decision-making without impinging upon the autonomy and intentions inherent in democratic governance.

Future of AI in Policy Influence

The study raises crucial questions about the extent to which AI should be allowed to influence public policy. As AI models continue to improve, their role in lobbying could become more significant, especially if deployed less visibly at state and local levels. This necessitates a critical discourse on setting boundaries to prevent undue AI-driven influence over democratic institutions.

Further research is required to address the ethical and practical challenges posed by AI systems capable of sophisticated natural language processing tasks. Developing robust detection mechanisms for AI-generated content is essential to ensure transparency and accountability in policy analysis and communication.

In conclusion, while LLMs offer a promising augmentation to human capabilities in tasks such as lobbying, careful consideration and safeguarding measures are crucial to maintain the integrity of democratic processes. Future developments should focus on aligning AI functionality with human values while mitigating risks associated with AI autonomy in law-making contexts.
