As research and adoption of artificial intelligence continue to advance at an accelerating pace, so do the risks associated with using AI. To help organizations navigate this complex landscape, researchers from MIT and other institutions have released the AI Risk Repository, a comprehensive database of hundreds of documented risks posed by AI systems. The repository aims to help decision-makers in government, research and industry assess the evolving risks of AI.
Bringing order to AI risk classification
While numerous organizations and researchers have recognized the importance of addressing AI risks, efforts to document and classify these risks have been largely uncoordinated, leading to a fragmented landscape of conflicting classification systems.
“We started our project aiming to understand how organizations are responding to the risks from AI,” Peter Slattery, incoming postdoc at MIT FutureTech and project lead, told VentureBeat. “We wanted a fully comprehensive overview of AI risks to use as a checklist, but when we looked at the literature, we found that existing risk classifications were like pieces of a jigsaw puzzle: individually interesting and useful, but incomplete.”
The AI Risk Repository tackles this challenge by consolidating information from 43 existing taxonomies, including peer-reviewed articles, preprints, conference papers and reports. This meticulous curation process has resulted in a database of more than 700 unique risks.
The repository uses a two-dimensional classification system. First, risks are categorized based on their causes, taking into account the entity responsible (human or AI), the intent (intentional or unintentional), and the timing of the risk (pre-deployment or post-deployment). This causal taxonomy helps to understand the circumstances and mechanisms by which AI risks can arise.
Second, risks are classified into seven distinct domains, including discrimination and toxicity, privacy and security, misinformation, and malicious actors and misuse.
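To make the two-dimensional scheme concrete, here is a minimal sketch of how a risk entry and a simple query over it might look in code. The field names and example entries are illustrative assumptions for this article, not the repository's actual schema:

```python
from dataclasses import dataclass

# Hypothetical record mirroring the repository's two taxonomies.
# Field names are illustrative, not the repository's real column names.
@dataclass
class AIRisk:
    description: str
    entity: str   # causal taxonomy: "human" or "AI"
    intent: str   # "intentional" or "unintentional"
    timing: str   # "pre-deployment" or "post-deployment"
    domain: str   # one of the seven domains, e.g. "Discrimination and toxicity"

# Two made-up example entries for demonstration only.
risks = [
    AIRisk("Biased hiring recommendations", "AI", "unintentional",
           "post-deployment", "Discrimination and toxicity"),
    AIRisk("Deepfake disinformation campaign", "human", "intentional",
           "post-deployment", "Misinformation"),
]

def post_deployment_risks_in_domain(risks, domain):
    """Filter risks by domain (second dimension) and timing (first dimension)."""
    return [r for r in risks
            if r.domain == domain and r.timing == "post-deployment"]

matches = post_deployment_risks_in_domain(risks, "Misinformation")
print([r.description for r in matches])
```

A query like this is how an organization might narrow the 700-plus entries to the subset relevant to its own systems, combining the causal dimensions (entity, intent, timing) with the domain classification.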
The AI Risk Repository is designed to be a living database. It is publicly accessible and organizations can download it for their own use. The research team plans to regularly update the database with new risks, research findings, and emerging trends.
Evaluating AI risks for the enterprise
The AI Risk Repository is designed to be a practical resource for organizations in different sectors. For organizations developing or deploying AI systems, the repository serves as a valuable checklist for risk assessment and mitigation.
“Organizations using AI may benefit from employing the AI Risk Database and taxonomies as a helpful foundation for comprehensively assessing their risk exposure and management,” the researchers write. “The taxonomies may also prove helpful for identifying specific behaviors which need to be performed to mitigate specific risks.”
For example, an organization developing an AI-powered hiring system can use the repository to identify potential risks related to discrimination and bias. A company using AI for content moderation can leverage the “Misinformation” domain to understand the potential risks associated with AI-generated content and develop appropriate safeguards.
The research team acknowledges that while the repository offers a comprehensive foundation, organizations will need to tailor their risk assessment and mitigation strategies to their specific contexts. However, having a centralized and well-structured repository like this reduces the likelihood of overlooking critical risks.
“We expect the repository to become increasingly useful to enterprises over time,” Neil Thompson, head of the MIT FutureTech Lab, told VentureBeat. “In future phases of this project, we plan to add new risks and documents and ask experts to review our risks and identify omissions. After the next phase of research, we should be able to provide more useful information about which risks experts are most concerned about (and why) and which risks are most relevant to specific actors (e.g., AI developers versus large users of AI).”
Shaping future AI risk research
Beyond its practical implications for organizations, the AI Risk Repository is also a valuable resource for AI risk researchers. The database and taxonomies provide a structured framework for synthesizing information, identifying research gaps, and guiding future investigations.
“This database can provide a foundation to build on when doing more specific work,” Slattery said. “Before this, people like us had two choices. They could invest significant time to review the scattered literature to develop a comprehensive overview, or they could use a limited number of existing frameworks, which might miss relevant risks. Now they have a more comprehensive database, so our repository will hopefully save time and increase oversight. We expect it to be increasingly useful as we add new risks and documents.”
The research team plans to use the AI Risk Repository as a foundation for the next phase of their own research.
“We will use this repository to identify potential gaps or imbalances in how risks are being addressed by organizations,” Thompson said. “For example, to explore if there is a disproportionate focus on certain risk categories while others of equal significance are being underaddressed.”
In the meantime, the research team will update the AI Risk Repository as the AI risk landscape evolves, and they will make sure it remains a useful resource for researchers, policymakers, and industry professionals working on AI risks and risk mitigation.