
Large Language Models as Corporate Lobbyists (2301.01181v7)

Published 3 Jan 2023 in cs.CL and cs.CY

Abstract: We demonstrate a proof-of-concept of a LLM conducting corporate lobbying related activities. An autoregressive LLM (OpenAI's text-davinci-003) determines if proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of novel ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model. It outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was the state-of-the-art model on many academic natural language tasks until text-davinci-003 was recently released. The performance of text-davinci-002 is worse than the simple baseline. Longer-term, if AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. Initially, AI is being used to simply augment human lobbyists for a small portion of their daily tasks. However, firms have an incentive to use less and less human oversight over automated assessments of policy ideas and the written communication to regulatory agencies and Congressional staffers. The core question raised is where to draw the line between human-driven and AI-driven policy influence.

Citations (18)

Summary

  • The paper demonstrates that text-davinci-003 achieves a 75.1% relevance accuracy when matching legislation to corporate profiles.
  • It leverages hundreds of labeled data points, outperforming the earlier text-davinci-002 model for lobbying tasks.
  • The study highlights both the potential for automating lobbying processes and the risks of AI-driven influence on democratic governance.

LLMs as Corporate Lobbyists: A Critical Assessment

The paper "Large Language Models as Corporate Lobbyists" by John J. Nay presents a proof-of-concept of using LLMs, specifically OpenAI's text-davinci-003, to automate aspects of corporate lobbying. The research explores whether LLMs can assess the relevance of proposed U.S. Congressional bills to specific companies, provide justifications and confidence levels, and draft persuasive letters to legislators. By comparing against the previous iteration, text-davinci-002, the paper highlights the rapid improvement in model performance on lobbying-related tasks.

Methodology and Results

To evaluate the potential of LLMs in lobbying, the paper deployed text-davinci-003 to identify congressional bills relevant to specific companies by analyzing both the company's self-description and the content of the proposed legislation. The model was benchmarked against hundreds of novel ground-truth relevance labels, achieving an accuracy of 75.1%, better than the baseline accuracy of always predicting irrelevance (70.9%). The older model, text-davinci-002, was far less effective, with an accuracy of 52.2%, below the baseline, suggesting text-davinci-003's superior language understanding.
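The benchmarking setup described above can be sketched as follows. This is a minimal illustration of the evaluation logic, not the paper's actual code; the labels and predictions below are hypothetical toy data, and the paper's prompts and dataset are not reproduced here.

```python
# Sketch of accuracy benchmarking against a majority-class baseline,
# as in the paper's evaluation. Data below is hypothetical.

def majority_baseline_accuracy(labels):
    """Accuracy of always predicting the most common label
    (for the paper, 'irrelevant')."""
    most_common = max(set(labels), key=labels.count)
    return labels.count(most_common) / len(labels)

def accuracy(predictions, labels):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Toy ground-truth labels: True = bill is relevant to the company.
labels = [False, False, False, True, False]
preds  = [False, True,  False, True, False]

print(majority_baseline_accuracy(labels))  # 0.8
print(accuracy(preds, labels))             # 0.8
```

A model only adds value on this task when its accuracy exceeds the majority-class baseline, which is the comparison the paper uses to separate text-davinci-003 (above baseline) from text-davinci-002 (below it).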

The model's confidence scores were also informative: when only predictions with high confidence scores were considered, accuracy improved to 79%. Additionally, the paper illustrated the use of LLMs to generate draft letters to legislators, showcasing their ability to construct coherent arguments, although further prompt engineering was required for optimal results.
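The confidence-threshold analysis can be sketched in a few lines: keep only the predictions whose self-reported confidence meets a threshold, then recompute accuracy on that subset. The function and data below are hypothetical illustrations of this technique, not the paper's implementation or results.

```python
# Sketch of confidence-threshold filtering: restrict evaluation to
# predictions at or above a confidence cutoff. All data is hypothetical.

def filtered_accuracy(preds, confs, labels, threshold):
    """Accuracy over the subset of predictions with confidence >= threshold.

    Returns None if no prediction clears the threshold."""
    kept = [(p, y) for p, c, y in zip(preds, confs, labels) if c >= threshold]
    if not kept:
        return None
    return sum(p == y for p, y in kept) / len(kept)

# Hypothetical relevance predictions with model-reported confidence scores.
preds  = [True, False, True, False]
confs  = [0.90, 0.95, 0.50, 0.60]
labels = [True, False, False, True]

print(filtered_accuracy(preds, confs, labels, threshold=0.0))  # 0.5 (all kept)
print(filtered_accuracy(preds, confs, labels, threshold=0.8))  # 1.0 (high-confidence only)
```

The pattern mirrors the paper's finding: filtering to high-confidence predictions can raise accuracy, at the cost of leaving some bills unclassified.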

Implications for AI and Law

The implications of this research touch on both practical and theoretical aspects of AI deployment. Practically, automating routine lobbying tasks could reduce operational costs and make lobbying more accessible to less-financed entities, potentially democratizing influence in legislative processes. However, the deployment of LLMs in influencing law poses notable risks, particularly regarding the autonomy of AI systems. The capability of LLMs to subtly steer policy discourse away from direct human intentions could undermine the integrity of democratic legislative processes.

From a theoretical viewpoint, the alignment of AI with human societal values becomes more complex when AI is involved in the law-making process itself. Legislation serves as a codified expression of social values and preferences; thus, AI's role should ideally be limited to supporting human decision-making without impinging upon the autonomy and intentions inherent in democratic governance.

Future of AI in Policy Influence

The paper raises crucial questions about the extent to which AI should be allowed to influence public policy. As AI models continue to improve, their role in lobbying could become more significant, especially if deployed less visibly at state and local levels. This necessitates a critical discourse on setting boundaries to prevent undue AI-driven influence over democratic institutions.

Further research is required to address the ethical and practical challenges posed by AI systems capable of sophisticated natural language processing tasks. Developing robust detection mechanisms for AI-generated content is essential to ensure transparency and accountability in policy analysis and communication.

In conclusion, while LLMs offer a promising augmentation to human capabilities in tasks such as lobbying, careful consideration and safeguarding measures are crucial to maintain the integrity of democratic processes. Future developments should focus on aligning AI functionality with human values while mitigating risks associated with AI autonomy in law-making contexts.


Authors (1)

John J. Nay