Come the new year, the incoming Trump administration is expected to make many changes to existing policies, and AI regulation will not be exempt. This will likely include repealing an AI executive order by current President Joe Biden.
The Biden order established government oversight offices and encouraged model developers to implement safety standards. Although its rules focus on model developers, a repeal could create challenges for enterprises as well. A few companies, like Trump ally Elon Musk's xAI, could benefit from a repeal of the order, but others are expected to face real issues: a patchwork of state-level regulations, less open sharing of data sources, less government-funded research and more emphasis on voluntary responsible AI programs.
Patchwork of local rules
Before the EO's signing, policymakers held several listening tours and hearings with industry leaders to determine how best to regulate the technology. Under the Democratic-controlled Senate, there was a strong possibility AI regulations could move forward, but insiders believe the appetite for federal rules around AI has cooled significantly.
Gaurab Bansal, executive director of Responsible Innovation Labs, said during the ScaleUp: AI conference in New York that the lack of federal oversight of AI could lead states to write their own policies.
“There’s a sense that both parties in Congress will not be regulating AI, so it will be states who may run the same playbook as California’s SB 1047,” Bansal said. “Enterprises need standards for consistency, but it’s going to be bad when there’s a patchwork of standards in different areas.”
California state legislators pushed SB 1047, which would have mandated a "kill switch" for models among other government controls, all the way to Gov. Gavin Newsom's desk. Newsom's veto of the bill was celebrated by industry luminaries like Meta's Yann LeCun. Even so, Bansal said states are likely to pass similar bills.
Dean Ball, a research fellow at George Mason University’s Mercatus Center, said companies may have difficulty navigating different regulations.
“Those laws may well create complex compliance regimes and a patchwork of laws for both AI developers and companies hoping to use AI; how a Republican Congress will respond to this potential challenge is unclear,” Ball said.
Voluntary responsible AI
Industry-led responsible AI programs have always existed. However, the burden on companies to be proactive about accountability and fairness may grow as their customers demand a focus on safety. Model developers and enterprise users should spend time implementing responsible AI policies and building standards that meet laws like the European Union's AI Act.
During the ScaleUp: AI conference, Microsoft Chief Product Officer for Responsible AI Sarah Bird said many developers and their customers, including Microsoft, are readying their systems for the EU's AI Act.
But even if no sprawling law governs AI, Bird said it's always good practice to bake responsible AI and safety into models and applications from the outset.
"This will be helpful for start-ups; a lot of the high level of what the AI Act is asking you to do is just good sense," Bird said. "If you're building models, you should govern the data going into them; you should test them. For smaller organizations, compliance becomes easier if you're doing it from scratch, so invest in a solution that will govern your data as it grows."
However, understanding what is in the data used to train the large language models (LLMs) that enterprises rely on could become harder. Jason Corso, a professor of robotics at the University of Michigan and a co-founder of computer vision company Voxel51, told VentureBeat that the Biden EO encouraged a lot of openness from model developers.
“We can’t fully know the impact of one sample on a model that presents a high degree of potential bias risk, right? So model users’ businesses could be at stake if there’s no governance around the use of these models and the data that went in,” Corso said.
Fewer research dollars
AI companies enjoy significant investor interest right now, but the government has often supported research that some investors feel is too risky. Corso noted that the new Trump administration might choose not to invest in AI research to save on costs.
“I just worry about not having the government resources to put it behind those types of high-risk, early-stage projects,” Corso said.
However, a new administration does not mean money will stop flowing to AI. While it's unclear whether the Trump administration will abolish the newly created AI Safety Institute and other AI oversight offices, the Biden administration did guarantee budgets through 2025.
“A pending question that must color Trump’s replacement for the Biden EO is how to organize the authorities and allocate the dollars appropriated under the AI Initiative Act. This bill is the source for many of the authorities and activities Biden has tasked to agencies such as NIST and funding is set to continue in 2025. With these dollars already allocated, many activities will likely continue in some form. What that form looks like, however, has yet to be revealed,” Mercatus Center research fellow Matt Mittelsteadt said.
We'll know more about how the next administration sees AI policy in January, but enterprises should prepare for whatever comes next.