Scale ML workflows with Amazon SageMaker Studio and Amazon SageMaker HyperPod | Amazon Web Services

Scaling machine learning (ML) workflows from initial prototypes to large-scale production deployment can be a daunting task, but the integration of Amazon SageMaker Studio and Amazon SageMaker HyperPod offers a streamlined solution to this challenge. As teams progress from proof of concept to production-ready models, they often struggle to efficiently manage growing infrastructure and storage needs. This integration addresses these hurdles by providing data scientists and ML engineers with a comprehensive environment that supports the entire ML lifecycle, from development to deployment at scale. In this post, we walk you through the process of scaling your ML workloads using SageMaker Studio…
Read More
Build generative AI applications quickly with Amazon Bedrock IDE in Amazon SageMaker Unified Studio | Amazon Web Services

Building generative AI applications presents significant challenges for organizations: they require specialized ML expertise, complex infrastructure management, and careful orchestration of multiple services. To address these challenges, we introduce Amazon Bedrock IDE, an integrated environment for developing and customizing generative AI applications. Formerly known as Amazon Bedrock Studio, Amazon Bedrock IDE is now incorporated into Amazon SageMaker Unified Studio (currently in preview). SageMaker Unified Studio combines various AWS services, including Amazon Bedrock, Amazon SageMaker, Amazon Redshift, AWS Glue, Amazon Athena, and Amazon Managed Workflows for Apache Airflow (MWAA), into a comprehensive data and AI development platform. In this blog…
Read More
A guide to Amazon Bedrock Model Distillation (preview) | Amazon Web Services

When using generative AI, achieving high performance with low-latency, cost-efficient models is often a challenge, because these goals can clash with each other. With the newly launched Amazon Bedrock Model Distillation feature, you can use smaller, faster, and more cost-efficient models that deliver use-case-specific accuracy comparable to the largest and most capable models in Amazon Bedrock for those specific use cases. Model distillation is the process of transferring knowledge from a larger, more capable model (the teacher) to a smaller model (the student), which is faster and more cost-efficient, to make the student model as performant…
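Conceptually, the student model is trained to match the teacher's softened output distribution. The sketch below is an illustrative toy in plain Python (the function names are our own, and this is not the Bedrock Model Distillation API, which manages the process for you); it shows the temperature-scaled KL-divergence objective commonly used in distillation:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions --
    # the quantity the student is trained to minimize.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss.
print(distillation_loss(teacher, teacher))               # → 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0)   # → True
```

In practice the managed feature handles generating teacher responses and fine-tuning the student; the loss above only illustrates why a small student can recover much of a large teacher's task-specific behavior.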
Read More
ABBYY Positioned as Leader in 2024 SPARK Matrix™ for IDP by QKS Group

QKS Group has named ABBYY a Leader in its 2024 SPARK Matrix analysis of the Intelligent Document Processing (IDP) market. The SPARK Matrix evaluates vendors on technology excellence and customer impact, offering an in-depth analysis of global market dynamics, major trends, vendor landscapes, and competitive positioning. ABBYY, with its comprehensive technology and customer experience management, received strong ratings across both parameters. By providing competitive analysis and ranking of leading technology vendors, the SPARK Matrix…
Read More
Amazon Bedrock Marketplace now includes NVIDIA models: Introducing NVIDIA Nemotron-4 NIM microservices | Amazon Web Services

This post is co-written with Abhishek Sawarkar, Eliuth Triana, Jiahong Liu, and Kshitiz Gupta from NVIDIA.  At AWS re:Invent 2024, we are excited to introduce Amazon Bedrock Marketplace. This is a new capability within Amazon Bedrock that serves as a centralized hub for discovering, testing, and implementing foundation models (FMs). It provides developers and organizations access to an extensive catalog of over 100 popular, emerging, and specialized FMs, complementing the existing selection of industry-leading models in Amazon Bedrock. Bedrock Marketplace enables model subscription and deployment through managed endpoints, all while maintaining the simplicity of the Amazon Bedrock unified APIs. The…
Read More
Real value, real time: Production AI with Amazon SageMaker and Tecton | Amazon Web Services

This post is co-written with Isaac Cameron and Alex Gnibus from Tecton. Businesses are under pressure to show return on investment (ROI) from AI use cases, whether predictive machine learning (ML) or generative AI. Only 54% of ML prototypes make it to production, and only 5% of generative AI use cases make it to production. ROI isn't just about getting to production; it's about model accuracy and performance. You need a scalable, reliable system with high accuracy and low latency for real-time use cases where every millisecond directly impacts the bottom line. Fraud detection, for example, requires extremely low latency…
Read More
Use Amazon Bedrock tooling with Amazon SageMaker JumpStart models | Amazon Web Services

Today, we're excited to announce a new capability that allows you to deploy over 100 open-weight and proprietary models from Amazon SageMaker JumpStart and register them with Amazon Bedrock, so you can seamlessly access them through the powerful Amazon Bedrock APIs. You can now use Amazon Bedrock features such as Amazon Bedrock Knowledge Bases and Amazon Bedrock Guardrails with models deployed through SageMaker JumpStart. SageMaker JumpStart helps you get started with machine learning (ML) by providing fully customizable solutions and one-click deployment and fine-tuning of more than 400 popular open-weight and proprietary generative AI models. Amazon Bedrock is a fully…
Read More
Introducing Amazon Kendra GenAI Index – Enhanced semantic search and retrieval capabilities | Amazon Web Services

Amazon Kendra is an intelligent enterprise search service that helps you search across different content repositories with built-in connectors. AWS customers use Amazon Kendra with large language models (LLMs) to quickly create secure, generative AI-powered conversational experiences on top of their enterprise content. As enterprises adopt generative AI, many are developing intelligent assistants powered by Retrieval Augmented Generation (RAG) to take advantage of information and knowledge from their enterprise data repositories. This approach combines a retriever with an LLM to generate responses; the retriever is responsible for finding relevant documents based on the user query. Customers seek to build comprehensive…
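The retriever-plus-LLM pattern described above can be sketched in a few lines. This is an illustrative toy, not the Amazon Kendra API: a bag-of-words cosine similarity stands in for Kendra's semantic index, and all function names and documents are invented for the example:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts. A real RAG system
    # would use a learned dense embedding model or a managed index.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    # The retriever's job: rank documents by similarity to the query,
    # then pass the top-k to the LLM as grounding context.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Expense reports must be filed within 30 days.",
    "The cafeteria opens at 8am on weekdays.",
]
print(retrieve("When do I file an expense report?", docs)[0])
# → Expense reports must be filed within 30 days.
```

The LLM would then receive the retrieved passage alongside the user's question, grounding its answer in enterprise content rather than its training data alone.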
Read More