Think tank calls for AI incident reporting system

The Centre for Long-Term Resilience (CLTR) has called for a comprehensive incident reporting system to urgently address a critical gap in AI regulation plans.

According to the CLTR, AI has a history of failing in unexpected ways, with news outlets recording over 10,000 safety incidents in deployed AI systems since 2014. As AI becomes more integrated into society, the frequency and impact of these incidents are likely to increase.

The think tank argues that a well-functioning incident reporting regime is essential for effective AI regulation, drawing parallels with safety-critical industries such as aviation and medicine. This view is supported by a broad consensus of experts, as well as the US and Chinese governments and the European Union.

The report outlines three key benefits of implementing an incident reporting system:

  1. Monitoring real-world AI safety risks to inform regulatory adjustments
  2. Coordinating rapid responses to major incidents and investigating root causes
  3. Identifying early warnings of potential large-scale future harms

Currently, the UK's AI regulation lacks an effective incident reporting framework. This gap leaves the Department for Science, Innovation & Technology (DSIT) without visibility of critical incidents, including:

  • Issues with highly capable foundation models
  • Incidents from the UK Government’s own AI use in public services
  • Misuse of AI systems for malicious purposes
  • Harms caused by AI companions, tutors, and therapists

The CLTR warns that without a proper incident reporting system, DSIT may learn about novel harms through news outlets rather than through established reporting processes.

To address this gap, the think tank recommends three immediate steps for the UK Government:

  1. Government incident reporting system: Establish a system for reporting incidents from AI used in public services. This can be a straightforward extension of the Algorithmic Transparency Recording Standard (ATRS) to include public sector AI incidents, feeding into a government body and potentially shared with the public for transparency.
  2. Engage regulators and experts: Commission regulators and consult with experts to identify the most concerning gaps, ensuring effective coverage of priority incidents and understanding stakeholder needs for a functional regime.
  3. Build DSIT capacity: Develop DSIT’s capability to monitor, investigate, and respond to incidents, potentially through a pilot AI incident database. This would form part of DSIT’s central function, initially focusing on the most urgent gaps but eventually expanding to include all reports from UK regulators.

Together, these recommendations aim to enhance the government's ability to improve public services responsibly and to build the infrastructure needed for collecting, investigating, and responding to AI incident reports.

As AI continues to advance and permeate various aspects of society, the implementation of a robust incident reporting system could prove crucial in mitigating risks and ensuring the safe development and deployment of AI technologies.

