Tech policy is only frustrating 90% of the time

Many technologists stay far away from public policy. That’s understandable. In our experience, most of the time when we engage with policymakers there is no discernible impact. But when we do make a difference to public policy, the impact is much bigger than what we can accomplish through academic work. So we find it fruitful to engage even if it feels frustrating on a day-to-day basis.

In this post, we summarize some common reasons why many people are cynical about tech policy and explain why we’re cautiously optimistic. We also announce some recent writings on tech policy as well as an upcoming event for policymakers in Washington, D.C., called AI Policy Precepts.

Some people want more tech regulation and others want less. But both sides seem to largely agree that policymakers are bad at regulating tech: because they lack technical expertise, because tech moves too rapidly for law to keep up, or because they are bad at anticipating the effects of regulation.

While these claims have a kernel of truth, they aren’t reasons for defeatism. It’s true that most politicians don’t have deep technical knowledge. But their job is not to be subject matter experts. The details of legislation are delegated to staffers, many of whom are experts on the subject. Moreover, much of tech policy is handled by agencies such as the Federal Trade Commission (FTC), which do have tech experts on their staff. There aren’t enough, but that’s being addressed in many ways. Finally, while federal legislators and agencies get the most press, a lot happens at the state and local levels.

Besides, policy does not have to move at the speed of tech. Policy is concerned with technology’s effect on people, not the technology itself. And policy has longstanding approaches to protecting humans that can be adapted to address new challenges from tech. For example, the FTC has taken action in response to deceptive claims made by AI companies under its existing authority. Similarly, the answer to AI-enabled discrimination is the enforcement of long-established anti-discrimination law. Of course, there are some areas where technology poses new threats, and that might require changes to laws, but that’s relatively rare.

In short, there is nothing exceptional about tech policy that makes it harder than any other type of policy requiring deep expertise. If we can do health policy or nuclear policy, we can do tech policy. Of course, there are many reasons why all public policy is slow and painstaking, such as partisan gridlock, or the bias towards inaction built into the structure of the government due to checks and balances. But none of these factors are specific to tech policy.

To be clear, we are not saying that all regulations or policies are useful—far from it. In past essays, we have argued against specific proposals for regulating AI. And there’s a lot that can be accomplished without new legislation. The October 2023 Executive Order by the Biden administration tasked over 50 agencies with 150 actions, showing the scope of existing executive authority.

We work at Princeton’s Center for Information Technology Policy. CITP is home to interdisciplinary researchers who look at tech policy from different perspectives. We have also begun working closely with the D.C. office of Princeton’s School of Public and International Affairs. Recently, we have been involved in a few collaborations on informing tech policy:

Foundation model transparency reports: In a Stanford-MIT-Princeton collaboration, we propose a structured way for AI companies to release key information about their foundation models. We draw inspiration from transparency reporting in social media, financial reporting, and the FDA’s adverse event reporting. We use the set of 100 indicators developed in the 2023 Foundation Model Transparency Index.

We analyze how the 100 indicators align with six existing proposals on AI: Canada’s Voluntary Code of Conduct for generative AI, the EU AI Act, the G7 Hiroshima Process Code of Conduct for AI, the U.S. Executive Order on AI, the U.S. Foundation Model Transparency Act, and the U.S. White House voluntary AI commitments. Of the 100 indicators in our proposal, 43 are required by at least one of these, with the EU AI Act alone requiring 30.

We also found that transparency requirements in government policies can lack specificity: they do not detail how precisely developers should report quantitative information, establish standards for reporting evaluations, or account for differences across modalities. We provide an example of what Foundation Model Transparency Reports could look like to help sharpen what information AI developers must provide. Read the paper here. 
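To make the coverage analysis concrete, here is a minimal Python sketch of how such a tally could be computed. The indicator names and proposal mappings below are hypothetical placeholders, not the actual data behind the 43-of-100 and 30-of-100 figures reported in the paper.

```python
from collections import Counter

# Hypothetical example: each indicator maps to the set of policy
# proposals that require it (empty set = required by no proposal).
indicator_requirements = {
    "training-data-sources": {"EU AI Act", "U.S. Executive Order on AI"},
    "compute-used": {"EU AI Act"},
    "model-evaluations": {"EU AI Act", "G7 Hiroshima Code of Conduct"},
    "usage-policy": set(),
    "downstream-incidents": {"Canada Voluntary Code of Conduct"},
}

# Indicators required by at least one proposal (43 of 100 in the paper).
covered = {name for name, reqs in indicator_requirements.items() if reqs}
print(f"{len(covered)} of {len(indicator_requirements)} indicators "
      "are required by at least one proposal")

# Per-proposal counts (the EU AI Act requires 30 of 100 in the paper).
per_proposal = Counter(p for reqs in indicator_requirements.values() for p in reqs)
for proposal, count in per_proposal.most_common():
    print(f"{proposal}: {count} indicator(s)")
```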

New Jersey Assembly hearing on deepfakes: Last month, Sayash testified before the New Jersey Assembly on reducing harm from deepfakes. We were asked to provide our opinion on four bills creating penalties and mitigations for non-consensual deepfakes. The hearing included testimonies from four experts in intellectual property, tech policy, civil rights, and constitutional law. 

We advocated for collecting better evidence on the impact of AI-generated deepfakes, content provenance standards to help prove that a piece of media is human-created (as opposed to watermarking to prove it is AI-generated), and bolstering defenses on downstream surfaces such as social media. We also cautioned against relying too much on the non-proliferation of powerful AI as a solution—as we’ve argued before, it is likely to be infeasible and ineffective. Read the written testimony here.

Open models and open research: We submitted a response to the National Telecommunications and Information Administration on its request for comments on openness in AI, in collaboration with colleagues in academia and civil society. Our response built on our paper and policy brief analyzing the societal impact of open foundation models. We were happy to see the paper cited in responses from several industry and civil society organizations, including the Center for Democracy and Technology, Mozilla, Meta, and Stability AI. Read our response here.

We also contributed to a comment to the Copyright Office in support of a safe harbor exemption for generative AI research, based on our paper and open letter (signed by over 350 academics, researchers, and civil society members). Read our comment here.

AI safety and existential risk: We’ve analyzed several aspects of AI safety in our recent writing, including the impact of openness, the need for safe harbors, and the pitfalls of model alignment. Another major topic of policy debate is the existential risk posed by AI. We’ve been researching this question for the past year and plan to start writing about it in the next few weeks.

AI Policy Precepts: CITP has launched a non-partisan program to explore the core concepts, opportunities, and risks underlying AI that will shape federal policymaking over the next ten years. The sessions will be facilitated by Arvind alongside CITP colleagues Matthew Salganik and Mihir Kshirsagar. Participation is limited to about 18 policymakers, drawn from Congressional offices and federal agencies. We will explore predictive and generative AI, moving beyond familiar talking points and examining real-world case studies. Participants will come away with frameworks for addressing future challenges, as well as the opportunity to build relationships with a cohort of fellow policymakers. See here for more information and here to nominate yourself or a colleague. The deadline for nominations is this Friday, April 5.

We thank Mihir Kshirsagar for feedback on a draft of this post.


