EY Survey: US Election Will Have a Big Impact On Tech

The 2024 presidential election will surely have far-reaching consequences in many areas — and artificial intelligence is no exception.

EY’s latest technology pulse poll, published in October, revealed that 74% of 503 tech leaders expect the election to impact AI regulation and global competitiveness. Although tech leaders said they plan to significantly increase AI investments in the next year, future growth of AI may hinge on the outcome of the election.

Respondents believe the outcome of the election will most affect regulation related to cybersecurity/data protections, AI and machine learning, and user data and content oversight.

“Of course, all of these are closely tied to innovation, growth and global competitiveness,” James Brundage, EY global & Americas technology sector leader, told TechRepublic. “The U.S. is the world’s tech innovation leader, so future tech policy should strike a balance that supports U.S. innovation while establishing guardrails where they are needed,” such as in data privacy, children’s online safety, and national security.

SEE: Year-round IT budget template (TechRepublic Premium)

Greater investments in AI

Notably, tech companies will continue to make significant investments in AI regardless of the outcome of the presidential election, according to the survey. However, the result may impact the direction of fiscal, tax, tariff, antitrust, and regulatory policies as well as interest rates, mergers and acquisitions, initial public offerings, and AI regulations, the survey said.

“We were surprised that trade/tariffs were not higher up on the minds of these executives,” Brundage observed.

On the heels of a sluggish tech market in 2024, he said that “the 2025 trajectory is bullish, as companies focus on raising capital to invest in growth and emerging technologies like AI.”

The majority of tech leaders (82%) said their company plans to increase AI investments by 50% or more in the next year. Those investments will focus on key areas including AI-specific talent (60%), cybersecurity (49%), and back-office functions (45%).

With an eye on innovation, most tech industry leaders surveyed also plan to allocate resources toward AI investments in the next six to 12 months. During that period, 78% of tech leaders reported their company is considering divesting non-core assets or businesses as part of its growth strategy.

Big organizations struggling with AI initiatives

Brundage also found it surprising that 63% of tech leaders report their organization’s AI initiatives have successfully moved to the implementation phase.

“That number seems high, but several factors could explain it,” he noted. “First, companies may be focusing on short-term, low-hanging fruit AI projects, which are easier to implement, have higher success rates, but may not be the opportunities with maximum impact.”

Further, use of “quick-buy solutions like ChatGPT or Copilot, which are relatively simple to deploy and drive productivity, may inflate this percentage.” Also, successful implementation “likely means moving from proof of concept (POC) to implementation,” Brundage said, adding that “real challenges such as data quality, scaling, governance, and infrastructure still lie ahead.”

Additionally, size matters — the report observed that organizations with more employees are finding less success moving AI initiatives to the implementation phase.

Among respondents who said fewer than half of their AI initiatives have been implemented successfully, data quality issues (40%) and talent/skills shortages (34%) were the most common reasons initiatives failed to progress to the next stage.

How the election’s impact on AI could be felt

Regardless of who takes office in 2025, current regulatory and enforcement trends related to AI could continue, given that the Federal Trade Commission and Department of Justice have been very active and may remain so, Brundage said. Because “some legislative proposals are bipartisan … we expect that they will advance in 2025 or 2026,” he added, citing children’s online safety as one example.

But he pointed out that state legislatures and attorneys general also impact policy, “so it’s a nuanced playing field. We expect these changes to be measured in years, not months.”

Tech leaders must realize that the U.S. is experiencing a new geopolitical environment compared with five to 10 years ago, Brundage said.

“New government industrial policy in the U.S. and around the globe is driving business action — both in the tech sector and in the industries and supply chains that it relies upon. These global tech businesses are particularly at the forefront of geopolitics as countries seek to de-risk from one another.”

AI capabilities have also become highly competitive and geopolitically significant across the globe, he said. “There is a dual race to innovate and regulate here in the U.S. and elsewhere. We see a need to have business models that account for the different regulatory approaches like sovereign frontier models.”

Wanted: AI tech talent search intensifies

As organizations continue to integrate more AI functionality into their businesses, the need to hire AI-specific talent will increase, as will the need to restructure or reduce headcount in legacy job functions, according to the survey.

Eighty percent of tech leader respondents foresee reducing headcount in legacy functions or restructuring it toward other in-demand functions, and 77% anticipate an increase in hiring for AI-specific talent. Additionally, 40% of technology leaders said human capital efforts, such as training, will be the focus of their company’s AI investments next year.

AI’s impact on national security and foreign policy

Meanwhile, the Biden administration in late October released the first-ever AI-focused national security memorandum (NSM) to ensure that the U.S. continues to lead in the development and deployment of AI technologies. The memorandum also prioritizes how the country adopts and uses AI while preserving privacy, human rights, civil rights, and civil liberties so the technology can be trusted.

The NSM also calls for the creation of a governance and risk management framework for how agencies implement AI, requiring them to monitor, assess, and mitigate AI risks related to those issues.


