The buzzy bot’s underlying technology, generative AI, could write emails, produce code, and generate graphics in minutes. Suddenly, the days when workers pored over their inboxes and painstakingly crafted presentations seemed like relics of the past.
Companies, lured by profit and productivity gains, rushed to adopt the technology. According to a May survey from consulting firm McKinsey & Company, 65% of the more than 1,300 companies it surveyed said they now regularly use generative AI — double the number using it the year before.
But the risks of misusing the technology loom large. Generative AI can hallucinate, spread misinformation, and reinforce biases against marginalized groups if it’s not managed properly. Given that the technology relies on vast volumes of data, much of it sensitive, the potential for data breaches is also high. At worst, there’s the danger that the more sophisticated the technology becomes, the less likely it is to align with human values.
With great power, then, comes great responsibility, and companies that make money from generative AI must also ensure they govern its use.
That’s where a chief ethics officer comes in.
A critical role in the age of AI
The details of the role vary from company to company, but, broadly, a chief ethics officer is responsible for determining the impact a company’s use of AI might have on the larger society, according to Var Shankar, the chief AI and privacy officer at Enzai, a software platform for AI governance, risk, and compliance. “So beyond just your company and your bottom line, how does it affect your customers? How does it affect other people in the world? And then how does it affect the environment?” he told Business Insider. Then comes “building a program that standardizes and scales those questions every time you use AI.”
It’s a role that gives policy nerds and philosophy majors, alongside programming whizzes, a footing in the fast-changing tech industry. And it often comes with a sizable annual paycheck in the mid-six figures.
Right now, though, companies aren’t hiring people into these roles fast enough, according to Steve Mills, the chief AI ethics officer at Boston Consulting Group. “I think there’s a lot of talk about risk and principles, but little action to operationalize that within companies,” he said.
A C-suite level responsibility
Those who are successful in the role ideally have four areas of expertise, according to Mills. They should have a technical grasp of generative AI, experience building and deploying products, an understanding of the major laws and regulations around AI, and significant experience hiring and making decisions at an organization.
“Too often, I see people put midlevel managers in charge, and while they may have expertise, desire, and passion, they typically don’t have the stature to change things within the organization and rally legal, business, and compliance teams together,” he said. Every Fortune 500 company using AI at scale needs to charge an executive with overseeing a responsible AI program, he added.
Shankar, a lawyer by training, said that the role doesn’t require any specific educational background. The most important qualification is understanding a company’s data. That means having a handle on the “ethical implications of the data that you collect, use, where it comes from, where it was before it was in your organization, and what kinds of consent you have around it,” he said.
He pointed to the example of healthcare providers who could unintentionally perpetuate biases if they don’t have a firm grasp of their data. In a study published in Science, hospitals and health insurance companies that used an algorithm to identify patients who would benefit from “high-risk care management” ended up prioritizing healthier white patients over sicker Black patients. The algorithm used past healthcare spending as a proxy for medical need, and because less money had historically been spent on Black patients with the same level of need, it scored them as healthier than they were. That’s the kind of blunder an ethics officer can help companies avoid.
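To make that failure mode concrete, here is a minimal, hypothetical sketch in Python. It is not the study’s actual model or data; it simply simulates two groups with identical medical need but unequal historical spending, then shows how ranking patients by a cost proxy under-selects the underserved group.

```python
# Hypothetical illustration of proxy bias: ranking patients by predicted
# *cost* when the real goal is predicted *need*. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two equally sized groups with identical distributions of medical need.
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
need = rng.normal(50, 10, n)        # true health need (what the program cares about)

# Historical spending: group B receives less care at the same level of need,
# so cost systematically understates need for that group.
access = np.where(group == 1, 0.7, 1.0)
cost = need * access + rng.normal(0, 5, n)

# Select the top 10% of patients for "high-risk care management".
k = n // 10
by_cost = np.argsort(cost)[-k:]     # what a cost-trained model effectively does
by_need = np.argsort(need)[-k:]     # what the program actually intends

print(f"Share of group B selected by cost proxy: {group[by_cost].mean():.0%}")
print(f"Share of group B selected by true need:  {group[by_need].mean():.0%}")
```

Run it and the cost-based ranking selects far fewer group B patients than the need-based one, even though both groups are equally sick by construction. That gap between the label a model is trained on and the outcome a company actually cares about is precisely what a data-literate ethics officer is hired to catch.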
Collaborating across companies and industries
Those in the role should also be able to communicate confidently with various stakeholders.
Christina Montgomery, IBM’s vice president, chief privacy and trust officer, and chair of its AI Ethics Board, told BI that her days are usually packed with client meetings and events, alongside other responsibilities.
“I spent a lot of time externally, probably more time lately, in speaking at events and engaging with policymakers and on the external boards because I feel like we have very much an opportunity to influence and determine what the future looks like,” she said.
She sits on boards like the International Association of Privacy Professionals, which recently launched an Artificial Intelligence Governance Professional certification for individuals who want to lead the field of AI ethics. She also engages with government leaders and other chief ethics officers.
“I think it’s absolutely critical that we be talking to each other on a regular basis and sharing best practices, and we do a lot of that across companies,” she said.
She aims to develop a broader understanding of what’s happening on a societal level — something she sees as key to the role.
“My fear at the space that we are at right now is that there’s no interoperability globally among all these regulations, and what’s expected, and what’s right and wrong in terms of what companies are going to have to comply with,” she said. “We can’t operate in a world that way. So the conversations among companies, governments, and boards are so important right now.”