Bridging code and conscience: UMD’s quest for ethical and inclusive AI

Source: University of Maryland


As artificial intelligence systems increasingly permeate critical decision-making processes in our everyday lives, the integration of ethical frameworks into AI development is becoming a research priority. At the University of Maryland (UMD), interdisciplinary teams tackle the complex interplay between normative reasoning, machine learning algorithms, and socio-technical systems. 

In a recent interview with Artificial Intelligence News, postdoctoral researchers Ilaria Canavotto and Vaishnav Kameswaran discussed how they combine expertise in philosophy, computer science, and human-computer interaction to address pressing challenges in AI ethics. Their work spans the theoretical foundations of embedding ethical principles into AI architectures and the practical implications of AI deployment in high-stakes domains such as employment.

Normative understanding of AI systems

Ilaria Canavotto, a researcher at UMD’s Values-Centered Artificial Intelligence (VCAI) initiative, is affiliated with the Institute for Advanced Computer Studies and the Philosophy Department. She is tackling a fundamental question: How can we imbue AI systems with normative understanding? As AI increasingly influences decisions that impact human rights and well-being, systems have to comprehend ethical and legal norms.

“The question that I investigate is, how do we get this kind of information, this normative understanding of the world, into a machine that could be a robot, a chatbot, anything like that?” Canavotto says.

Her research combines two approaches:

Top-down approach: This traditional method involves explicitly programming rules and norms into the system. However, Canavotto points out, “It’s just impossible to write them down as easily. There are always new situations that come up.”

Bottom-up approach: A newer method that uses machine learning to extract rules from data. While more flexible, it lacks transparency: “The problem with this approach is that we don’t really know what the system learns, and it’s very difficult to explain its decision,” Canavotto notes.

Canavotto and her colleagues, Jeff Horty and Eric Pacuit, are developing a hybrid approach that combines the strengths of both. They aim to create AI systems that can learn rules from data while maintaining explainable decision-making processes grounded in legal and normative reasoning.

“[Our] approach […] is based on a field that is called artificial intelligence and law. So, in this field, they developed algorithms to extract information from the data. So we would like to generalise some of these algorithms and then have a system that can more generally extract information grounded in legal reasoning and normative reasoning,” she explains.
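To make the hybrid idea concrete, here is a minimal, purely illustrative sketch (not the UMD system, and far simpler than genuine AI-and-law algorithms): candidate rules are extracted from labelled past cases (the bottom-up step), then stored and applied as an explicit, inspectable rule base (the top-down step), so every decision can cite the rule that produced it. All case data, feature names, and thresholds below are hypothetical.

```python
# Hypothetical hybrid normative reasoner: learn rules from cases, decide explainably.
from collections import Counter
from typing import NamedTuple


class Rule(NamedTuple):
    feature: str   # attribute a case must have for the rule to fire
    verdict: str   # normative conclusion the rule supports
    support: int   # number of training cases backing the rule


def extract_rules(cases, min_support=2):
    """Bottom-up step: count (feature, verdict) pairs in past cases and
    keep those with enough support as explicit rules."""
    counts = Counter()
    for features, verdict in cases:
        for f in features:
            counts[(f, verdict)] += 1
    return [Rule(f, v, n) for (f, v), n in counts.items() if n >= min_support]


def decide(rules, features):
    """Top-down step: apply the learned rules and return the verdict of the
    best-supported firing rule, together with a human-readable explanation."""
    firing = [r for r in rules if r.feature in features]
    if not firing:
        return "undecided", "no applicable rule"
    best = max(firing, key=lambda r: r.support)
    return best.verdict, f"rule '{best.feature} -> {best.verdict}' (support={best.support})"


# Hypothetical past cases: (observed features, normative verdict).
cases = [
    ({"collected_data_without_consent"}, "impermissible"),
    ({"collected_data_without_consent", "commercial_use"}, "impermissible"),
    ({"obtained_informed_consent"}, "permissible"),
    ({"obtained_informed_consent", "commercial_use"}, "permissible"),
]

rules = extract_rules(cases)
verdict, explanation = decide(rules, {"collected_data_without_consent"})
print(verdict, "-", explanation)
```

Unlike an opaque learned model, the rule base here can be printed and audited, which is the explainability property the hybrid approach is after.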

AI’s impact on hiring practices and disability inclusion

While Canavotto focuses on the theoretical foundations, Vaishnav Kameswaran, affiliated with UMD’s NSF Institute for Trustworthy AI and Law and Society, examines AI’s real-world implications, particularly its impact on people with disabilities.

Kameswaran’s research looks into the use of AI in hiring processes, uncovering how systems can inadvertently discriminate against candidates with disabilities. He explains, “We’ve been working to… open up the black box a little, try to understand what these algorithms do on the back end, and how they begin to assess candidates.”

His findings reveal that many AI-driven hiring platforms rely heavily on normative behavioural cues, such as eye contact and facial expressions, to assess candidates. This approach can significantly disadvantage individuals with specific disabilities. For instance, visually impaired candidates may struggle with maintaining eye contact, a signal that AI systems often interpret as a lack of engagement.

“By focusing on some of those qualities and assessing candidates based on those qualities, these platforms tend to exacerbate existing social inequalities,” Kameswaran warns. He argues that this trend could further marginalise people with disabilities in the workforce, a group already facing significant employment challenges.

The broader ethical landscape

Both researchers emphasise that the ethical concerns surrounding AI extend far beyond their specific areas of study. They touch on several key issues:

  1. Data privacy and consent: The researchers highlight the inadequacy of current consent mechanisms, especially regarding data collection for AI training. Kameswaran cites examples from his work in India, where vulnerable populations unknowingly surrendered extensive personal data to AI-driven loan platforms during the COVID-19 pandemic.
  2. Transparency and explainability: Both researchers stress the importance of understanding how AI systems make decisions, especially when these decisions significantly impact people’s lives.
  3. Societal attitudes and biases: Kameswaran points out that technical solutions alone cannot solve discrimination issues. There’s a need for broader societal changes in attitudes towards marginalised groups, including people with disabilities.
  4. Interdisciplinary collaboration: The researchers’ work at UMD exemplifies the importance of cooperation between philosophy, computer science, and other disciplines in addressing AI ethics.

Looking ahead: solutions and challenges

While the challenges are significant, both researchers are working towards solutions:

  • Canavotto’s hybrid approach to normative AI could lead to more ethically aware and explainable AI systems.
  • Kameswaran suggests developing audit tools for advocacy groups to assess AI hiring platforms for potential discrimination.
  • Both emphasise the need for policy changes, such as updating the Americans with Disabilities Act to address AI-related discrimination.
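One simple check such an audit tool might run, sketched below under stated assumptions: compare selection rates across candidate groups using the “four-fifths rule” heuristic from US employment-testing guidance. The outcome data and group labels are invented for illustration; this is not from Kameswaran’s work, just one well-known disparity measure an advocacy group could apply to a platform’s outputs.

```python
# Hypothetical audit check: flag large selection-rate disparities between groups.

def selection_rate(outcomes):
    """Fraction of candidates in a group who were advanced or hired (1 = advanced)."""
    return sum(outcomes) / len(outcomes)


def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one; values below 0.8
    are commonly flagged for review under the four-fifths rule."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi > 0 else 1.0


# Illustrative outcomes for disabled vs. non-disabled applicants.
disabled = [1, 0, 0, 0, 0]       # 20% selection rate
non_disabled = [1, 1, 1, 0, 0]   # 60% selection rate

ratio = disparate_impact_ratio(disabled, non_disabled)
print(f"impact ratio: {ratio:.2f}")  # 0.20 / 0.60 ≈ 0.33, well below the 0.8 threshold
```

A real audit would need far more than a single ratio (intersectional groups, confidence intervals, access to the platform’s decisions), but even this crude check makes disparities visible to groups outside the company.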

However, they also acknowledge the complexity of the issues. As Kameswaran notes, “Unfortunately, I don’t think that a technical solution to training AI with certain kinds of data and auditing tools is in itself going to solve a problem. So it requires a multi-pronged approach.”

A key takeaway from the researchers’ work is the need for greater public awareness of AI’s impact on our lives. People often don’t know how much data they share or how it’s being used. As Canavotto points out, companies often have an incentive to obscure this information, describing them as “companies that try to tell you my service is going to be better for you if you give me the data.”

The researchers argue that much more needs to be done to educate the public and hold companies accountable. Ultimately, Canavotto and Kameswaran’s interdisciplinary approach, combining philosophical inquiry with practical application, is a step in the right direction towards ensuring that AI systems are not only powerful but also ethical and equitable.


