It’s the issue that takes the headlines and the question that keeps execs up at night: How do organizations audit their AI models for bias, performance and ethical standards?
VentureBeat welcomed UiPath and others to the latest VB AI Impact Tour in New York City to talk about methodologies, best practices and real-world case studies. Michael Raj, VP of network enablement (AI and data) at Verizon Communications, Rebecca Qian, co-founder and CTO at Patronus AI, and Matt Turck, managing director at FirstMark, offered distinct points of view. Closing the event, VB CEO Matt Marshall spoke with Justin Greenberger, SVP of client success at UiPath, about what audit success looks like and where to start.
“The risk landscape used to be evaluated on an annual basis,” Greenberger said. “I think the risk landscape needs to be evaluated almost monthly now. Do you understand your risks? Do you understand the controls that are mitigating them and how to evaluate that? IIA [Institute of Internal Auditors] just came out with their updated AI framework. It’s good, but again, it’s a lot of basics. What are your monitoring KPIs? What’s the transparency from the data source? Do you have sourceability? Do you have accountability? Do you have people signing off on the data sources? The evaluation cycle should be a lot tighter.”
He pointed to GDPR, which was widely viewed as over-regulation at the time, but which has ultimately created the data security foundation for most companies operating today. What’s interesting about generative AI, he noted, is that instead of the usual lag seen in countries with stricter regulations, markets across the globe are keeping pace with one another and evolving at essentially the same speed. That levels the competitive field as organizations weigh their risk tolerance across every axis of the technology, as well as its potential ramifications.
Challenges as pilots and proof of concepts explode
True enterprise-wide transformation is still fairly nascent, but a huge number of companies have initial projects in place, testing the waters to some extent. Some challenges remain constant: for instance, finding subject matter experts with the contextual understanding and critical thinking skills required to establish the parameters of use cases and how they should be implemented. Another common audit and control challenge is enablement and engagement, which entails employee education. At this stage of the gen AI revolution, though, the full scope of what employees should and should not know or do is still not entirely clear, Greenberger said, especially as technologies like deepfakes gain traction.
The last challenge is keeping pace with the componentized implementation of generative AI. Organizations are largely adding generative AI to existing workflows rather than overhauling entire processes, and audits will need to adapt as the technology becomes more widespread: for instance, monitoring the way private data is pulled into and leveraged in a medical use case.
How the role of the human will evolve
Humans remain in the loop for now, as risks and controls continue to evolve along with the technology, Greenberger said. A user first queries the system; gen AI then makes the calculations and supplies the data the employee needs to do their job. At a logistics provider, that might be a job quote the employee accepts and offers to the customer. That decision, and the direct interaction with the customer, is a human role that might eventually end up on the chopping block, however.
“Humans will still have a decisioning process as of now,” Greenberger said. “As we get more comfortable with the audit controls and spot checks over time, you’ll see that lessen. Will humans take on more of the creative and the emotional aspect? That’s what we get educated on as managers and executives now. Focus on creative and emotional concepts, because your decision-making responsibilities might be taken away from you. That’s more of a matter of time than anything.”