The development and deployment of AI continue to evolve rapidly, and so do the associated risks. The nature of those risks varies greatly depending on the specific application, necessitating a tailored approach to risk management.
While the risks of AI across applications are well-documented, there is no single repository that contains comprehensive and unified information on these risks. An MIT lab is working to close that gap.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and MIT FutureTech have developed an “AI Risk Repository,” a database cataloging hundreds of documented risks posed by AI systems.
Having initiated the project because it was essential for their research, the MIT team soon recognized that it could be valuable to many others as well. They started compiling a publicly accessible and comprehensive AI risk repository that decision-makers can use to assess the evolving risks of AI. The database can be useful to anyone from developers and researchers to policymakers and enterprises.
To compile the repository, the researchers worked with teams from the University of Queensland, the Future of Life Institute, KU Leuven, and AI startup Harmony Intelligence. The team carried out an extensive search, including consultation with academic experts and academic databases, to identify 43 AI risk classification frameworks. From these, more than 700 AI risks were extracted and categorized by risk domain, risk subdomain, and cause.
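To get a feel for what that three-way categorization implies, consider a minimal sketch of how one such risk record might be represented and queried. The field names and sample values below are illustrative assumptions, not the repository’s actual schema or data:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One extracted risk record, illustrating the three-way categorization."""
    description: str  # the risk as stated in the source framework
    domain: str       # high-level risk domain, e.g. "Misinformation" (assumed label)
    subdomain: str    # finer-grained subdomain within that domain
    cause: str        # causal attribute, e.g. whether the risk is human- or AI-driven
    source: str       # the classification framework the risk was extracted from

# Two invented sample entries; the real repository holds 700+.
risks = [
    RiskEntry("AI-generated content misleads users",
              "Misinformation", "False or misleading information",
              "AI", "Framework A"),
    RiskEntry("Model outputs reinforce social biases",
              "Discrimination and toxicity", "Unfair discrimination",
              "AI", "Framework B"),
]

# The kind of lookup a policymaker or developer might run: all risks in one domain.
misinformation_risks = [r for r in risks if r.domain == "Misinformation"]
print(f"{len(misinformation_risks)} misinformation risk(s) found")
```

Structuring each entry this way is what makes the repository filterable by domain, subdomain, or cause rather than a flat list of risks.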
“The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. It is part of a larger effort to understand how we are responding to AI risks and to identify if there are gaps in our current approaches,” says Dr. Neil Thompson, head of the MIT FutureTech Lab and one of the lead researchers on the project.
While compiling the repository, the research team discovered gaps and inconsistencies in existing AI risk frameworks, which individually covered only a small portion of the risks cataloged in MIT’s comprehensive AI Risk Repository. These blind spots can have significant implications for AI development, usage, and governance.
According to the researchers, existing third-party frameworks often focus heavily on certain AI risks while overlooking others. For example, misinformation is a serious AI risk, yet only 44% of the frameworks cover it.
Similarly, more than half of the frameworks explored the potential for AI to perpetuate discrimination, but only 12% covered pollution of the information ecosystem, such as the proliferation of AI-generated spam and the resulting degradation of information quality.
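Underneath these figures is simple coverage arithmetic: for each risk subdomain, count the fraction of the 43 frameworks that mention it. The sketch below illustrates the calculation with invented framework names and coverage sets; it is not the repository’s actual data or tooling:

```python
# Hypothetical map from each framework to the risk subdomains it covers.
coverage = {
    "Framework A": {"misinformation", "discrimination"},
    "Framework B": {"discrimination", "privacy"},
    "Framework C": {"discrimination", "info_ecosystem_pollution"},
    "Framework D": {"privacy", "misinformation"},
}

all_subdomains = set().union(*coverage.values())
n_frameworks = len(coverage)

# Fraction of frameworks mentioning each subdomain; low values flag blind spots.
stats = {
    sub: sum(sub in covered for covered in coverage.values()) / n_frameworks
    for sub in all_subdomains
}

for sub, frac in sorted(stats.items(), key=lambda kv: kv[1]):
    print(f"{sub}: covered by {frac:.0%} of frameworks")
```

Sorting subdomains by coverage, as in the last loop, is what surfaces the kind of imbalance the researchers describe: heavily covered risks at the top, neglected ones at the bottom.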
“We are starting with a comprehensive checklist to help us understand the breadth of potential risks,” Thompson adds. “We plan to use this to identify shortcomings in organizational responses. For instance, if everyone focuses on one type of risk while overlooking others of similar importance, that’s something we should notice and address.”
The research team acknowledges that, while the repository is comprehensive in many ways, it is not without limitations. Although the researchers screened more than 17,000 documents to identify the 43 frameworks, the search is not exhaustive; other AI risks may be described in documents that were not screened.
The repository may also miss unpublished, niche, or emerging risks that have not yet surfaced in the general AI literature. Additionally, it does not categorize risks by potentially important factors such as the likelihood and impact of each risk, and it does not address how risks interact with one another.
The AI Risk Repository is intended as a living document, allowing users to continually refine and enhance it as the AI risk landscape evolves.
In future phases of the project, the MIT team plans to identify omissions and add new risks and documents to the repository. The team also plans to include information that delivers targeted, context-specific insights, such as implications for different types of users.