Amazon Expands Guardrails Feature for Bedrock

(Michael Vi/Shutterstock)

Amazon is among several tech companies that have implemented the White House's recommendations to set guardrails for the responsible use of GenAI. This was part of an industry-wide initiative to promote safe and ethical AI development and deployment. 

In April 2024, Amazon unveiled Guardrails for Amazon Bedrock, the company’s enterprise platform for building and scaling generative AI applications. The feature allows users to block harmful content and evaluate model safety and accuracy based on application requirements and responsible AI policies. 

Guardrails for Amazon Bedrock offers customizable safeguards on top of the models' native protections. Amazon claims it can block as much as 85% more harmful content and filter out over 75% of hallucinated responses for RAG and summarization workloads. 

Building on its Guardrails capabilities, Amazon Web Services (AWS) introduced the standalone ApplyGuardrail API at the AWS Summit in New York on July 10. 

The ApplyGuardrail API enables customers to establish safeguards for their GenAI applications across different foundation models, including self-managed and third-party models. This means that AWS customers can apply safeguards to GenAI applications that are hosted outside the AWS infrastructure. 

The new API can also be used to independently evaluate user inputs and model responses at various stages of the GenAI application, offering more flexibility in application development. For example, in RAG applications, users can filter harmful inputs before they reach the knowledge base while also having the ability to separately evaluate the output after the retrieval and generation process. 
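As a rough illustration, calling the ApplyGuardrail API from Python with boto3 might look like the sketch below. The guardrail ID and version are placeholders, and the exact request and response shapes should be confirmed against the AWS documentation; the key idea is that the same guardrail can be applied to a user prompt (`source="INPUT"`) or to a model response (`source="OUTPUT"`) independently, regardless of where the model itself is hosted.

```python
# Hypothetical guardrail identifier and version -- replace with your own.
GUARDRAIL_ID = "my-guardrail-id"
GUARDRAIL_VERSION = "1"


def build_request(text: str, source: str) -> dict:
    """Build keyword arguments for bedrock-runtime's apply_guardrail call.

    `source` is "INPUT" to screen a user prompt before it reaches the
    knowledge base, or "OUTPUT" to screen a model response after
    retrieval and generation -- each stage evaluated separately.
    """
    return {
        "guardrailIdentifier": GUARDRAIL_ID,
        "guardrailVersion": GUARDRAIL_VERSION,
        "source": source,
        "content": [{"text": {"text": text}}],
    }


def check_text(text: str, source: str = "INPUT") -> bool:
    """Return True if the guardrail lets the text pass without intervening."""
    import boto3  # AWS SDK for Python; requires AWS credentials to run

    client = boto3.client("bedrock-runtime")
    response = client.apply_guardrail(**build_request(text, source))
    # "NONE" indicates no guardrail policy intervened on this content.
    return response["action"] == "NONE"
```

Because the check is a standalone API call rather than part of a Bedrock model invocation, the same function could screen text destined for a self-managed or third-party model.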

“Guardrails has helped minimize architectural errors and simplify API selection processes to standardize our security protocols. As we continue to evolve our AI strategy, Amazon Bedrock and its Guardrails feature are proving to be invaluable tools in our journey toward more efficient, innovative, secure, and responsible development practices,” said Andres Hevia Vega, Deputy Director of Architecture at MAPFRE, one of the largest insurance companies in Spain.   

The ApplyGuardrail API is available in all AWS regions where Guardrails for Amazon Bedrock is available. 

The tech giant also announced new Contextual Grounding capabilities at the NY Summit. This feature allows users to check for AI hallucinations, addressing one of the key challenges in using GenAI. 

AWS customers rely on the inherent capabilities of foundation models to generate grounded responses based on the company's source data. However, when a foundation model produces incorrect or irrelevant information, it casts doubt on the reliability of the GenAI application. AI models can combine or conflate data to generate information that is biased or inaccurate. 


To help overcome this challenge, AWS has introduced Contextual Grounding, which adds a new safeguard to detect AI hallucinations before the responses reach the user. Amazon claims that Contextual Grounding can detect and filter more than 75% of AI hallucinations across use cases including information extraction, RAG, and summarization. 

The Contextual Grounding update is based on two filtering parameters. The first is a grounding threshold: the minimum confidence score for a model response to be considered grounded in the source data. 

The second is a relevance threshold: the minimum confidence score for the model's response to be considered relevant to the query. Any response that falls below either threshold is blocked. Users can adjust both thresholds to match the accuracy tolerance of their specific use case. 
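To make the two-threshold behavior concrete, here is a minimal sketch of the filtering logic in Python. The score names, threshold values, and blocked-message text are illustrative assumptions, not Bedrock's actual implementation; the point is simply that a response must clear both checks to reach the user.

```python
# Illustrative thresholds -- in Bedrock these are user-configurable,
# letting each application set its own accuracy tolerance.
GROUNDING_THRESHOLD = 0.75  # min confidence the response is grounded in source data
RELEVANCE_THRESHOLD = 0.75  # min confidence the response is relevant to the query

# Hypothetical message returned in place of a blocked response.
BLOCKED_MESSAGE = "Sorry, the model cannot answer this question from the source documents."


def filter_response(response: str,
                    grounding_score: float,
                    relevance_score: float) -> str:
    """Return the response if it clears both thresholds, else block it."""
    if grounding_score < GROUNDING_THRESHOLD or relevance_score < RELEVANCE_THRESHOLD:
        return BLOCKED_MESSAGE
    return response
```

Raising either threshold makes the filter stricter: more hallucinated or off-topic responses are caught, at the cost of occasionally blocking acceptable ones.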

The introduction of features like Contextual Grounding and the ApplyGuardrail API reflects Amazon’s commitment to fostering a safe and responsible environment for GenAI development and deployment. As one of the leaders in the industry, Amazon can inspire other tech companies to adopt responsible AI frameworks. 

Related Items

Credo AI Unveils GenAI Guardrails to Help Organizations Harness Generative AI Tools Safely and Responsibly

DataRobot ‘Guard Models’ Keep GenAI on the Straight and Narrow

Rethinking ‘Open’ for AI


