Google has said that Britain risks being left behind in the global artificial intelligence race unless the government moves quickly to build more datacentres and let tech companies use copyrighted work in their AI models.
The company pointed to research showing that the UK is ranked seventh on a global AI readiness index for data and infrastructure, and called for a number of policy changes.
Google’s UK managing director, Debbie Weinstein, said that the government “sees the opportunity” in AI but needs to introduce more policies boosting its deployment.
“We have a lot of advantages and a lot of history of leadership in this space, but if we do not take proactive action, there is a risk that we will be left behind,” she said.
AI is undergoing a global investment boom after breakthroughs in the technology, led by the release of the ChatGPT chatbot from the US company OpenAI and rival models such as Google’s Gemini.
However, government-backed AI projects have been early victims of cost-cutting by Keir Starmer’s government. In August, Labour confirmed it would not push ahead with unfunded commitments of £800m for the creation of an exascale supercomputer – considered key infrastructure for AI research – and a further £500m for the AI Research Resource, which funds computing power for AI.
Asked about the supercomputer decision, Weinstein referred to the government’s forthcoming “AI action plan” under the tech entrepreneur Matt Clifford. “We’re hopeful to see a really comprehensive view around what are the investments that we need to make in the UK,” she said.
Google has outlined its UK policy suggestions in a document called “unlocking the UK’s AI potential”, which will be released this week. In it, the company recommends the creation of a “national research cloud”: a publicly funded mechanism for providing computing power and data – two key factors in building the AI models behind products such as ChatGPT – to startups and academics.
The report adds that the UK “struggles to compete with other countries for data centre investment” and welcomes Labour’s commitment to build more of the centres as it prepares to introduce a new planning and infrastructure bill.
Other recommendations in the Google report include setting up a national skills service to help the workforce adapt to AI, and introducing the technology more widely into public services.
It also calls for changes to UK copyright laws, after attempts to draft a new code for using copyrighted material to train AI models were abandoned this year.
Data from copyright-protected material such as news articles and academic papers is seen as vital for models that underpin tools like chatbots, which are “trained” on billions of words that allow them to understand text-based prompts and predict the right response to them. The same concerns apply to models that make music or images.
The Google document calls for the relaxation of restrictions on a practice known as text and data mining (TDM), in which copyrighted work may be copied; under current UK law this is permitted only for non-commercial purposes such as academic research.
The Conservative government dropped plans to allow TDM for commercial purposes in 2023, amid deep concerns from the creative industries and news publishers.
“The unresolved copyright issue is a block to development, and a way to unblock that, obviously, from Google’s perspective, is to go back to where I think the government was in 2023 which was TDM being allowed for commercial use,” said Weinstein.
The report also calls for “pro-innovation” regulation, signalling support for the regulatory setup that is in place, where oversight of AI is managed by various public regulators including the Competition and Markets Authority and the Information Commissioner’s Office.
“We would encourage the government to continue looking first to the existing regulation, as opposed to creating new regulation,” said Weinstein.
UK ministers are in the process of drafting a consultation on an AI bill that is reportedly focused on making a voluntary AI model testing agreement between the UK government and tech companies legally binding, as well as making the UK’s AI Safety Institute an arm’s length government body.
The Department for Science, Innovation and Technology was contacted for comment.