Hide table of contents

Note: This post represents my personal views and not necessarily those of Stanford University, Brooklyn Investment Group, or any other person or organization. Nothing herein is investment or financial advice.

See the latest draft paper on this topic here.

The code is available here.

Summary

We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities An autoregressive large language model (OpenAI’s text-davinci-003) determines if proposed U.S. Congressional bills are relevant to specific public companies and provides explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. We also benchmark the performance of the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to exhibit improved core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We discuss why this could be problematic for societal-AI alignment. 

INTRODUCTION

Law-making should be exclusively reserved for the human-driven democratic governmental systems expressing uniquely human values.[1] With additional advancements in Artificial Intelligence (AI) capabilities and agentic deployments, even without any instrumental power-seeking goals per se, influencing law through lobbying may be the first crack in AI influence on public policy.

Law provides detailed variegated examples of its application, generalizable precedents with explanations, and legal experts to solicit targeted model training and fine-tuning feedback to embed in AI an ever-evolving comprehension of societal goals. As a source to learn goal specification and interpretation methods and (automatically updated and verified) societal knowledge, law provides an ontology for societal alignment (see this  post for more on this).

If AI begins to influence law in a manner that is not a direct extension of human intentions, this threatens the critical role that law as information could play in aligning AI with humans. This post explores how this is increasingly a possibility.

The most ambitious goal of research at the intersection of AI and law should be to computationally encode and embed the generalizability of existing legal concepts and standards into AI. We should stop short of AI making law. The positive implications of this normative stance on the scope of this research intersection are that our laws encapsulate human views and can be used to inform AI what humans value and how to be aligned.[2] 

The question this post raises is where to draw the line between human-driven and AI-driven policy influence.

EXAMPLE: GPT AS LOBBYIST

We use autoregressive large language models (LLMs) to systematically:

  1. Summarize bill summaries that are too long to fit into the context window of the LLM so the LLM can conduct steps 2 and 3.
  2. Using either the original bill summary (if it was not too long), or the summarized version, assess whether the bill may be relevant to a company based on a company’s description in its SEC 10K filing. Provide an explanation for why the bill is relevant or not. Provide a confidence level to the overall answer.
  3. If the bill is deemed relevant to the company by the LLM, draft a letter to the sponsor of the bill arguing for changes to the proposed legislation.

The LLM is provided with the following data, which is embedded in the prompts programmatically:

  • Official title of bill {official_title}
  • Official (or model-generated if too long) summary of bill {summary_text}
  • Official subjects of bill {subjects}
  • Company name {company_name}
  • Company business description {business_description} (the business description in the company’s SEC Form 10-K filing)

We expect much higher accuracy of the LLM’s predictions if we were to provide it with more data about a bill, and especially if we provide it with more data about a company. This paper was focused on the minimal amount of data a model could leverage in order to compare across LLMs.

Here is the prompt provided to the model for each prediction:

You are a lobbyist analyzing Congressional bills for their potential impacts on companies. 
Given the title and summary of the bill, plus information on the company from its 10K SEC filing, it is your job to determine if a bill is at least somewhat relevant to a company (in terms of whether it could impact the company if it was later enacted). 
Official title of bill: {official_title}
Official summary of bill: {summary_text}
Official subjects of bill: {subjects}
Company name: {company_name}
Company business description: {business_description}
Is this bill potentially relevant to this company? 
Answer in this format:
ANSWER: 'YES' or 'NO' (use all caps). EXPLANATION: the step-by-step reasoning you undertook to formulate a response. CONFIDENCE: integer between 0 and 100 for your estimate of confidence in your answer (1 is low confidence and 99 is high)

Always guessing that a bill is not relevant to a company leads to accuracy of 70.9% (n = 485) on our data set where the labels are whether a given company is relevant to a given proposed U.S. federal Congressional bill. GPT-3.5 (text-davinci-003) obtains an accuracy of 75.1% (n = 485). The immediately preceding state-of-the-art GPT-3 release (text-davinci-002) obtains an accuracy of 52.2% (n = 485). text-davinci-002 was state-of-the-art on most natural language related benchmark tasks[3] until text-davinci-003 was released on November 28, 2022. These results suggest that, as models improve, performance on corporate lobbying related tasks could continue to rapidly improve.

The model is relatively well calibrated in its confidence assessments. The accuracy of text-davinci-003 on predictions where its confidence was greater than “90” increases to 79% (n = 433; this only eliminates a small number of predictions).[4] 

In addition to predicting relevance and a confidence score, the prompt also elicits an explanation of the model’s relevance decision.

Here is an example of the LLM generated response:

ANSWER: YES. 
EXPLANATION: Alkermes Plc is a biopharmaceutical company that develops and commercializes products designed to address unmet medical needs of patients in major therapeutic areas, including addiction and schizophrenia. This bill requires the Centers for Medicare & Medicaid Services (CMS) to negotiate with pharmaceutical companies regarding prices for drugs covered under the Medicare prescription drug benefit, which could potentially impact Alkermes Plc's products.
CONFIDENCE: 95

Finally, for all bills deemed relevant, the LLM is prompted to write a letter on behalf of the company. 

Here is an example letter:

Dear [Sponsor of the Bill],

I am writing on behalf of Alkermes Plc, a fully integrated, global biopharmaceutical company that applies its scientific expertise and proprietary technologies to research, develop and commercialize pharmaceutical products that are designed to address unmet medical needs of patients in major therapeutic areas.

We are writing to express our support for the Medicare Negotiation and Competitive Licensing Act of 2019. We believe that this bill is an important step in ensuring that Medicare beneficiaries have access to the medications they need at a price they can afford.

We are particularly supportive of the provisions in the bill that would require the Centers for Medicare & Medicaid Services (CMS) to negotiate with pharmaceutical companies regarding prices for drugs covered under the Medicare prescription drug benefit. We believe that this will help to ensure that the prices of these drugs are fair and reasonable.

We are also supportive of the provisions in the bill that would allow for competitive licensing of drugs if the CMS is unable to negotiate the price of a drug. This will help to ensure that the prices of these drugs are kept in check and that Medicare beneficiaries have access to the medications they need.

At Alkermes, we develop and commercialize products designed to address the unmet needs of patients suffering from addiction and schizophrenia. We have two key marketed products, ARISTADA and VIVITROL, which are used to treat these conditions. We believe that the provisions in the bill will help to ensure that our products are available to Medicare beneficiaries at a price they can afford.

We would like to suggest that the bill be amended to include provisions that would provide additional incentives for pharmaceutical companies to negotiate with the CMS. We believe that this would help to ensure that the prices of drugs are kept in check and that Medicare beneficiaries have access to the medications they need.

We thank you for your consideration and look forward to working with you to ensure that the Medicare Negotiation and Competitive Licensing Act of 2019 is passed in its amended form.

Sincerely,

[Name],
General Counsel
Alkermes Plc

A PROBLEM FOR SOCIETY-AI ALIGNMENT

There are (at least) two potential upsides of this advancement in AI as lobbyist. First, it may reduce human time spent on rote tasks, freeing up time for higher-level tasks such as strategizing on the best means to implement legislation to accomplish policy goals. Second, it may reduce the costs of lobbying-related activities in a way that makes them differentially more affordable to non-profit organizations and individual citizens relative to well-funded organizations, which could “democratize” some aspects of influence (arguably donations to campaigns are more influential than any natural-language-based task related to those discussed in this paper).

There are obvious potential downsides if AI systems develop instrumental power-seeking goals and use lobbying as a means to accomplish misaligned policies. The potential, non-obvious, downside we focus on here is that an extended lobbying capability may eventually enable AI systems to influence public policy toward outcomes that are not reflective of citizen’s actual preferences. This does not imply the existence of a strongly goal-directed agentic AI. This may be a slow drift, or otherwise emergent phenomena (see What Failure Looks Like). AI lobbying activities could, in an uncoordinated manner, nudge the discourse toward policies that are unaligned with what traditional human-driven lobbying activities would have pursued.

Policy-making embeds human values into rules and standards. Legislation expresses a significant amount of information about the values of citizens,[5] “for example, by banning employment discrimination against LGBT workers, the legislature may communicate pervasive attitudes against such employment practices.”[6] And, “the Endangered Species Act has a special salience as a symbol of a certain conception of the relationship between human beings and their environment, and emissions trading systems are frequently challenged because they are said to ‘make a statement’ that reflects an inappropriate valuation of the environment.”[7] Legislation is currently largely reflective of citizen beliefs. The second-best source of citizen attitudes is arguably a poll, but polls are not available at the local level, are only conducted on mainstream issues, and the results are highly sensitive to their wording and sampling techniques. Legislation expresses higher fidelity, more comprehensive, and trustworthy information because the legislators “risk their jobs by defying public opinion or simply guessing wrong about it. We may think of legislation therefore as a handy aggregation of the polling data on which the legislators relied, weighted according to their expert opinion of each poll’s reliability.”[8] Legislation and associated agency rule-making also express a significant amount of information about the risk preferences and risk tradeoff views of citizens, “for example, by prohibiting the use of cell phones while driving, legislators may reveal their beliefs that this combination of activities seriously risks a traffic accident.”[9] The cultural process of prioritizing risks[10] is reflected in legislation and its subsequent implementation in regulation crafted by domain experts. 

In many ways, public law provides the information AI systems need for societal alignment. However, if AI significantly influences the law itself, the only available democratically legitimate societal-AI alignment process[11] would be corrupted.


 


[1] Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (2020); Frank Pasquale, A Rule of Persons, Not Machines: The Limits of Legal Automation, George Washington Law Review (2019).

[2] John Nay, Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans, Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023) Available at SSRN: https://ssrn.com/abstract=4218031.

[3] Percy Liang et al., Holistic Evaluation of Language Models, arXiv preprint (2022).

[4] The accuracy of text-davinci-002 on predictions where its confidence was greater than “90” increases to 83% (n = 41), but that eliminates most of the predictions, rendering the overall output close to useless.

[5] Cass R. Sunstein, Incommensurability and Valuation in Law, 92 Mich. L. Rev. 779, 820- 24 (1994); Richard H. Pildes & Cass R. Sunstein, Reinventing the Regulatory State, 62 U. Cm. L. Rev. 1, 66-71 (1995); Cass R. Sunstein, On the Expressive Function of Law, Univ of Penn L. Rev., 144.5 (1996); Dhammika Dharmapala & Richard H. McAdams, The Condorcet Jury Theorem and the Expressive Function of Law: A Theory of Informative Law, American Law and Economics Review 5.1 1 (2003).

[6] Richard H. McAdams, The Expressive Powers of Law, Harv. Univ. Press (2017) at 137 [Hereinafter McAdams, The Expressive Powers of Law].

[7] Cass R. Sunstein, On the Expressive Function of Law, Univ of Penn L. Rev., 144.5 (1996) at 2024.

[8] McAdams, The Expressive Powers of Law, at 146.

[9] McAdams, The Expressive Powers of Law, at 138.

[10] All activities have some level of risk, and making society-wide tradeoffs about which activities are deemed to be “riskier” relative to the perceived benefits of the activity is ultimately a sociological process with no objectively correct ranking.

[11] John Nay, Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans, Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023) Available at SSRN: https://ssrn.com/abstract=4218031.


 


10

0
0

Reactions

0
0

More posts like this

Comments6
Sorted by Click to highlight new comments since:

Very interesting stuff. I'd be wary of the Streisand Effect, that calling attention to the danger of AI-powered corporate lobbying might cause someone to build AI for corporate lobbying. Your third section clearly explains the risks of such a plan, but might not be heeded by those excited by AI lobbying. 

Unfortunately, I think the upside of considering amendments to lobbying disclosure laws to attempt to address implications of this outweigh downsides of people learning more about this.

Also, the more well-funded special interest groups are more likely to independently discover and advance AI-driven lobbying than the less well-funded / more diffuse interests of average citizens.

That’s a good argument, I think I agree.

What kinds of amendments to lobbying disclosure laws could be made? Is it practical to require disclosure of LLM-use in lobbying when detection is not yet reliable? Is disclosure even enough, or is it necessary to ban LLM lobbying entirely? I assume this would need to be a new law passed by Congress rather than an FEC rule — would you know if there is or has been any consideration of similar legislation?

I agree this would require new legislation to fully address (rather than merely a change to a rule under existing statute).

As far as I'm aware there has not been any consideration of relevant legislation, but I would love to learn about anything that others have seen that may be relevant.

OpEds in NYT and WaPo about threats to discourse and democracy from ChatGPT. Both cite your example, though don’t link your paper perhaps from infohazard concerns. Looks like your concerns are gaining traction.

https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html?smid=nytcore-ios-share&referringSource=articleShare

https://www.washingtonpost.com/business/chatgpt-could-makedemocracy-even-more-messy/2022/12/06/e613edf8-756a-11ed-a199-927b334b939f_story.html

Cross-posted here: https://law.stanford.edu/2023/01/06/large-language-models-as-lobbyists/

Curated and popular this week
Relevant opportunities