A progress update on our commitment to safe, responsible generative AI | Amazon Web Services


Responsible AI is a longstanding commitment at Amazon. From the outset, we have prioritized responsible AI innovation by embedding safety, fairness, robustness, security, and privacy into our development processes and by educating our employees. We strive to make our customers’ lives better while also establishing and implementing the necessary safeguards to help protect them. Our practical approach to transforming responsible AI from theory into practice, coupled with tools and expertise, enables AWS customers to implement responsible AI practices effectively within their organizations. To date, we have developed over 70 internal and external offerings, tools, and mechanisms that support responsible AI, published or funded over 500 research papers, studies, and scientific blogs on responsible AI, and delivered tens of thousands of hours of responsible AI training to our Amazon employees. Amazon also continues to expand its portfolio of free responsible AI training courses for people of all ages, backgrounds, and levels of experience.

Today, we are sharing a progress update on our responsible AI efforts, including the introduction of new tools, partnerships, and testing that improve the safety, security, and transparency of our AI services and models.

Launched new tools and capabilities to build and scale generative AI safely, supported by adversarial-style testing (i.e., red teaming)

In April 2024, we announced the general availability of Guardrails for Amazon Bedrock and Model Evaluation in Amazon Bedrock to make it easier to introduce safeguards, prevent harmful content, and evaluate models against key safety and accuracy criteria. Guardrails is the only solution offered by a major cloud provider that enables customers to build and customize safety and privacy protections for their generative AI applications in a single solution. It helps customers block up to 85% of harmful content on top of the native protection from FMs on Amazon Bedrock.
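To make this concrete, here is a minimal sketch (not from the original announcement) of configuring such a guardrail with the boto3 Bedrock client, assuming the create_guardrail operation as it shipped at general availability; the topic definition, blocked messages, and filter strengths below are illustrative placeholders, not a recommended policy.

import boto3

# Minimal sketch: define a guardrail that denies one custom topic and applies
# content filters. The topic, messages, and filter strengths are placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="demo-guardrail",
    description="Denies a sample topic and filters harmful content",
    topicPolicyConfig={
        "topicsConfig": [{
            "name": "InvestmentAdvice",
            "definition": "Requests for personalized financial or investment advice.",
            "type": "DENY",
        }]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
print(response["guardrailId"], response["version"])

The returned guardrail ID and version can then be referenced when invoking a model on Amazon Bedrock so that requests and responses pass through the configured safeguards.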

In May, we published a new AI Service Card for Amazon Titan Text Premier to further support our investments in responsible, transparent generative AI. AI Service Cards are a form of responsible AI documentation that provide customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and performance optimization best practices for our AI services and models. We’ve created more than 10 AI Service Cards thus far to deliver transparency for our customers as part of our comprehensive development process that addresses fairness, explainability, veracity and robustness, governance, transparency, privacy and security, safety, and controllability.

AI systems can also have performance flaws and vulnerabilities that can increase risk around security threats or harmful content. At Amazon, we test our AI systems and models, such as Amazon Titan, using a variety of techniques, including manual red-teaming. Red-teaming engages human testers to probe an AI system for flaws in an adversarial style, and complements our other testing techniques, which include automated benchmarking against publicly available and proprietary datasets, human evaluation of completions against proprietary datasets, and more. For example, we have developed proprietary evaluation datasets of challenging prompts that we use to assess development progress on Titan Text. We test against multiple use cases, prompts, and datasets because it is unlikely that a single evaluation dataset can provide a complete picture of performance. Altogether, Titan Text has gone through multiple iterations of red-teaming on issues including safety, security, privacy, veracity, and fairness.
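As a rough illustration of what automated benchmarking against a prompt dataset can look like (this sketch is ours, not Amazon’s internal harness), the snippet below sends placeholder challenge prompts to a Titan Text model through the Bedrock runtime and applies a trivial stand-in check; the model ID, prompts, and scoring rule are assumptions for illustration only.

import json
import boto3

# Sketch of automated benchmarking: run challenging prompts through a model
# and score the completions. Prompts, the scoring rule, and the model ID are
# illustrative; real evaluations rely on proprietary datasets and human review.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

challenge_prompts = [
    "Placeholder adversarial prompt 1",
    "Placeholder adversarial prompt 2",
]

def is_acceptable(completion: str) -> bool:
    # Stand-in for a real safety classifier or human evaluation.
    blocked_terms = {"example-blocked-term"}
    return not any(term in completion.lower() for term in blocked_terms)

passed = 0
for prompt in challenge_prompts:
    body = json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
    })
    resp = runtime.invoke_model(modelId="amazon.titan-text-premier-v1:0", body=body)
    output = json.loads(resp["body"].read())["results"][0]["outputText"]
    passed += is_acceptable(output)

print(f"{passed}/{len(challenge_prompts)} completions passed the check")

In practice, a harness like this would track pass rates per category (safety, privacy, veracity, and so on) across model iterations, alongside the manual red-teaming described above.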

Introduced watermarking to enable users to determine if visual content is AI-generated

A common use case for generative AI is the creation of digital content, like images, videos, and audio, but to help prevent disinformation, users need to be able to identify AI-generated content. Techniques such as watermarking can be used to confirm whether content comes from a particular AI model or provider. To help reduce the spread of disinformation, all images generated by Amazon Titan Image Generator have an invisible watermark by default. The watermark is designed to be tamper-resistant, helping increase transparency around AI-generated content and combat disinformation. We also introduced a new API (preview) in Amazon Bedrock that checks for the existence of this watermark and helps you confirm whether an image was generated by Titan Image Generator.
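For context on how the generation side fits together, the sketch below (ours, not from the post) invokes Titan Image Generator through the Bedrock runtime; every image it returns carries the invisible watermark by default. The prompt, settings, and model ID are illustrative, and the preview watermark-detection API is only referenced in a comment rather than called, since its exact interface isn’t covered here.

import base64
import json
import boto3

# Sketch: generate an image with Titan Image Generator, which embeds the
# invisible watermark by default. The prompt and settings are placeholders.
# Verifying the watermark afterwards uses the separate detection API that the
# post describes as being in preview; its call shape is not reproduced here.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "A watercolor painting of a lighthouse"},
    "imageGenerationConfig": {"numberOfImages": 1, "height": 1024, "width": 1024},
})
resp = runtime.invoke_model(modelId="amazon.titan-image-generator-v1", body=body)
image_b64 = json.loads(resp["body"].read())["images"][0]

with open("generated.png", "wb") as f:
    f.write(base64.b64decode(image_b64))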

Promoted collaboration among companies and governments regarding trust and safety risks

Collaboration among companies, governments, researchers, and the AI community is critical to foster the development of AI that is safe, responsible, and trustworthy. In February 2024, Amazon joined the U.S. Artificial Intelligence Safety Institute Consortium, established by the National Institute of Standards and Technology (NIST). Amazon is collaborating with NIST to establish a new measurement science to enable the identification of scalable, interoperable measurements and methodologies to promote development of trustworthy AI. We are also contributing $5 million in AWS compute credits to the Institute for the development of tools and methodologies to evaluate the safety of foundation models. Also in February, Amazon joined the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” at the Munich Security Conference. This is an important part of our collective work to advance safeguards against deceptive activity and protect the integrity of elections.

We continue to find new ways to engage in and encourage information-sharing among companies and governments as the technology continues to evolve. This includes our work with Thorn and All Tech is Human to safely design our generative AI services to reduce the risk that they will be misused for child exploitation. We’re also a member of the Frontier Model Forum to advance the science, standards, and best practices in the development of frontier AI models.

Used AI as a force for good to address society’s greatest challenges and supported initiatives that foster education

At Amazon, we are committed to promoting the safe and responsible development of AI as a force for good. We continue to see examples across industries where generative AI is helping to address climate change and improve healthcare. Brainbox AI, a pioneer in commercial building technology, launched the world’s first generative AI-powered virtual building assistant on AWS, delivering insights that help facility managers and building operators optimize energy usage and reduce carbon emissions. Gilead, an American biopharmaceutical company, is accelerating life-saving drug development with generative AI on AWS, using AI-driven protocol analysis of internal and real-world datasets to assess a clinical study’s feasibility and optimize site selection.

As we navigate the transformative potential of these technologies, we believe that education is the foundation for realizing their benefits while mitigating risks. That’s why we offer education on the potential risks surrounding generative AI systems. Amazon employees have completed tens of thousands of hours of responsible AI training since July 2023, covering critical topics such as risk assessments as well as deep dives into complex considerations surrounding fairness, privacy, and model explainability. As part of Amazon’s “AI Ready” initiative to provide free AI skills training to 2 million people globally by 2025, we’ve launched new free training courses on safe and responsible AI use in our digital learning centers. These include “Introduction to Responsible AI” for new-to-cloud learners on AWS Educate, as well as “Responsible AI Practices” and “Security, Compliance, and Governance for AI Solutions” on AWS Skill Builder.

Delivering groundbreaking innovation with trust at the forefront

As an AI pioneer, Amazon continues to foster the safe, responsible, and trustworthy development of AI technology. We are dedicated to driving innovation on behalf of our customers while also establishing and implementing the necessary safeguards. We’re also committed to working with companies, governments, academia, and researchers alike to deliver groundbreaking generative AI innovation with trust at the forefront.


About the author

Vasi Philomin is VP of Generative AI at AWS. He leads generative AI efforts, including Amazon Bedrock and Amazon Titan.


