Normal Computing Selected for ARIA’s £50M Scaling Compute Programme

Normal Computing UK, an AI and hardware company, was selected as one of 12 teams awarded funding from the Advanced Research and Invention Agency (ARIA) Scaling Compute Programme. The programme, backed by £50M in funding, aims to reduce AI hardware costs by 1000x while diversifying the semiconductor supply chain. Normal Computing’s hardware initiative is led by Chief Scientist Dr. Patrick Coles, formerly of Los Alamos National Laboratory, and will apply expertise in noise-based computing and thermodynamics to develop physics-based computing chips for matrix inversion and to explore applications in training large-scale AI models, with the goal of transforming AI hardware efficiency. Normal Computing’s trademark “thermodynamic computing” approach…
Read More
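The principle behind physics-based linear algebra can be illustrated with a toy simulation (a software sketch of the general idea, not Normal Computing’s hardware or published method): an overdamped Langevin system relaxing in the quadratic potential defined by a symmetric positive-definite matrix A has equilibrium covariance A⁻¹, so sampling its noisy fluctuations estimates the matrix inverse.

```python
import numpy as np

def langevin_inverse(A, dt=0.005, burn_in=2_000, n_samples=200_000, seed=0):
    """Estimate A^{-1} for symmetric positive-definite A by simulating
    overdamped Langevin dynamics dx = -A x dt + sqrt(2) dW.  The stationary
    density is proportional to exp(-x^T A x / 2), so the stationary
    covariance of the trajectory equals A^{-1}."""
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    steps = burn_in + n_samples
    noise = rng.standard_normal((steps, d)) * np.sqrt(2.0 * dt)
    x = np.zeros(d)
    samples = np.empty((n_samples, d))
    for i in range(steps):
        x = x - (A @ x) * dt + noise[i]  # Euler-Maruyama update
        if i >= burn_in:                 # discard pre-equilibrium steps
            samples[i - burn_in] = x
    return samples.T @ samples / n_samples  # sample covariance ~= A^{-1}

A = np.array([[2.0, 0.5], [0.5, 1.0]])
approx_inv = langevin_inverse(A)  # close to np.linalg.inv(A)
```

The appeal of doing this in analog hardware rather than software is that the physical noise and relaxation come for free, instead of being simulated step by step.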

Mimecast Announced AI-Powered Enhancements Across Product Offerings

New features utilize NLP to block BEC threats and prevent insider-driven data loss. Mimecast, a leading global Human Risk Management platform, has announced AI-powered enhancements across its product offerings: Advanced Business Email Compromise (BEC) Protection and market-leading content inspection for its Incydr data protection solution. Cyber risk is evolving at breakneck speed, with a wide spectrum of threats organizations must combat. By deploying AI where it counts, Mimecast helps ensure businesses can keep ahead of attackers while safeguarding their critical IP. These two advancements – spanning email security and insider threat management – deploy natural language processing (NLP) to ensure…
Read More

EDB Announces Commitment to Achieving FedRAMP Authorization

EDB to meet heightened security standards for mission-critical workloads in regulated industries, supporting the future of sovereign data and AI. EnterpriseDB (“EDB”), the leading Postgres® data and AI company, announces its plans to achieve Federal Risk and Authorization Management Program (FedRAMP®) Authorization, building on its strong foundation of delivering secure and compliant solutions to over 1,500 enterprise customers, including numerous government and public sector organizations such as those associated with the Department of Defense (DoD) and the Department of Justice (DOJ). FedRAMP delivers a standardized approach to cloud security across all federal agencies, addressing the specific needs of safeguarding Controlled Unclassified Information (CUI) and National Security Systems (NSS) within…
Read More

Patronus AI Launches Industry-First Self-Serve API

New solution enables developers to safeguard AI systems against failures with unmatched accuracy, flexibility, and pay-as-you-go pricing. Today, Patronus AI announced the launch of the Patronus API, the first self-serve solution that empowers developers to reliably detect and prevent AI failures in production. With the Patronus API, companies can now safeguard their generative AI systems against hallucinations, safety risks, and unexpected behavior with unparalleled precision and recall. Many companies face ongoing challenges with generative AI systems that fail in production, leading to issues like hallucinations, prompt injection attacks, and security risks. Current solutions have proven unreliable, with models like LlamaGuard and Prompt Guard…
Read More
Understanding Multimodal LLMs

It was a wild two months. There have once again been many developments in AI research, with two Nobel Prizes awarded for AI-related work and several interesting research papers published. Among others, Meta AI released their latest Llama 3.2 models, which include open-weight versions of the 1B and 3B large language models and two multimodal models. In this article, I aim to explain how multimodal LLMs function. Additionally, I will review and summarize roughly a dozen other recent multimodal papers and models published in recent weeks (including Llama 3.2) to compare their approaches. An illustration of a multimodal LLM that can accept different input…
Read More
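One common way such models accept image input can be sketched in miniature (a minimal illustration with hypothetical dimensions, not any specific model’s implementation): a vision encoder’s patch features are mapped into the LLM’s embedding space by a learned connector and then treated as extra tokens alongside the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a vision encoder emitting 16 patch features of
# size 512, projected into an LLM embedding space of size 1024.
num_patches, vision_dim, llm_dim = 16, 512, 1024

patch_features = rng.standard_normal((num_patches, vision_dim))
text_embeddings = rng.standard_normal((8, llm_dim))  # 8 text tokens

# The learned connector: here a single linear projection (real models may
# use an MLP or cross-attention instead of simple concatenation).
W_proj = rng.standard_normal((vision_dim, llm_dim)) / np.sqrt(vision_dim)
image_tokens = patch_features @ W_proj  # (16, 1024) "soft" image tokens

# Unified-embedding approach: image tokens are prepended to the text
# sequence and the decoder attends over both jointly.
sequence = np.concatenate([image_tokens, text_embeddings], axis=0)
print(sequence.shape)  # (24, 1024)
```

Only the projection (and possibly the encoder) needs new parameters; the decoder sees a longer but otherwise ordinary token sequence.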
How Druva used Amazon Bedrock to address foundation model complexity when building Dru, Druva’s backup AI copilot | Amazon Web Services

This post is co-written with David Gildea and Tom Nijs from Druva. Druva enables cyber, data, and operational resilience for thousands of enterprises, and is trusted by 60 of the Fortune 500. Customers use Druva Data Resiliency Cloud to simplify data protection, streamline data governance, and gain data visibility and insights. Independent software vendors (ISVs) like Druva are integrating AI assistants into their user applications to make software more accessible. Dru, the Druva backup AI copilot, enables real-time interaction and personalized responses, with users engaging in a natural conversation with the software. From finding inconsistencies and errors across the environment…
Read More
Use Amazon Q to find answers on Google Drive in an enterprise | Amazon Web Services

Amazon Q Business is a generative AI-powered assistant designed to enhance enterprise operations. It’s a fully managed service that helps provide accurate answers to users’ questions while adhering to the security and access restrictions of the content. You can tailor Amazon Q Business to your specific business needs by connecting to your company’s information and enterprise systems using built-in connectors to a variety of enterprise data sources. It enables users in various roles, such as marketing managers, project managers, and sales representatives, to have tailored conversations, solve business problems, generate content, take action, and more, through a web interface. This…
Read More
Best practices and lessons for fine-tuning Anthropic’s Claude 3 Haiku on Amazon Bedrock | Amazon Web Services

Fine-tuning is a powerful approach in natural language processing (NLP) and generative AI, allowing businesses to tailor pre-trained large language models (LLMs) for specific tasks. This process involves updating the model’s weights to improve its performance on targeted applications. By fine-tuning, the LLM can adapt its knowledge base to specific data and tasks, resulting in enhanced task-specific capabilities. To achieve optimal results, having a clean, high-quality dataset is of paramount importance. A well-curated dataset forms the foundation for successful fine-tuning. Additionally, careful adjustment of hyperparameters such as learning rate multiplier and batch size plays a crucial role in optimizing the…
Read More
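Dataset hygiene of the kind described can be sketched as a small validation pass over JSON Lines records. The record schema below is illustrative only and should be checked against Bedrock’s current fine-tuning documentation; the point is the prompt/response pairing and the basic checks a clean dataset needs.

```python
import json

# Illustrative fine-tuning records: one user/assistant exchange each.
examples = [
    {
        "system": "You classify support tickets by product area.",
        "messages": [
            {"role": "user", "content": "The backup job failed overnight."},
            {"role": "assistant", "content": "Category: Backup & Recovery"},
        ],
    },
]

def validate(record):
    """Reject records that would degrade fine-tuning quality."""
    roles = [m["role"] for m in record["messages"]]
    assert roles == ["user", "assistant"], "expect one user/assistant turn"
    assert all(m["content"].strip() for m in record["messages"]), "no empty turns"

lines = []
for rec in examples:
    validate(rec)
    lines.append(json.dumps(rec))

jsonl = "\n".join(lines)  # one JSON object per line, ready to upload
```

Checks like these catch empty or malformed turns before training time, which is where a “clean, high-quality dataset” pays off.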
Get ready to lose to Transformers on Lichess

“Figure 4: Two options to win the game in 3 or 5 moves, respectively (more options exist). Since they both map into the highest-value bin, our bot ignores Nh6+, the fastest way to win (in 3), and instead plays Nd6+ (mate-in-5). Unfortunately, a state-based predictor without explicit search cannot guarantee that it will continue playing the Nd6+ strategy and thus might randomly alternate between different strategies. Overall this increases the risk of drawing the game or losing due to a subsequent (low-probability) mistake, such as a bad softmax sample. Board from a game between our 9M Transformer (white) and a…
Read More
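The failure mode in the caption can be reproduced in miniature (a toy sketch, not the paper’s 9M-parameter model): once a win-probability head is discretized into value bins, distinct winning moves can land in the same top bin, leaving a stateless argmax bot indifferent between them from one move to the next.

```python
import random

NUM_BINS = 8

def value_bin(win_prob):
    """Discretize a win probability into one of NUM_BINS equal bins."""
    return min(int(win_prob * NUM_BINS), NUM_BINS - 1)

# Hypothetical predicted win probabilities for two candidate moves.
moves = {"Nh6+ (mate in 3)": 0.99, "Nd6+ (mate in 5)": 0.97}

bins = {m: value_bin(p) for m, p in moves.items()}
best = max(bins.values())
tied = [m for m, b in bins.items() if b == best]  # both land in the top bin

random.seed(1)
choice = random.choice(tied)  # a stateless bot may pick either, each turn
```

Because the bot re-evaluates from scratch every position, nothing ties this turn’s tie-break to the last one, which is exactly how it can drift between mating plans.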
Track, allocate, and manage your generative AI cost and usage with Amazon Bedrock | Amazon Web Services

As enterprises increasingly embrace generative AI, they face challenges in managing the associated costs. With demand for generative AI applications surging across projects and multiple lines of business, accurately allocating and tracking spend becomes more complex. Organizations need to prioritize their generative AI spending based on business impact and criticality while maintaining cost transparency across customer and user segments. This visibility is essential for setting accurate pricing for generative AI offerings, implementing chargebacks, and establishing usage-based billing models. Without a scalable approach to controlling costs, organizations risk unbudgeted usage and cost overruns. Manual spend monitoring and periodic usage limit…
Read More
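The chargeback arithmetic behind such tracking can be sketched as follows. The per-1,000-token prices and team usage records below are placeholders for illustration, not actual Bedrock pricing or real data.

```python
# Hypothetical USD rates per 1,000 tokens (placeholders, not real pricing).
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

usage = [  # (team, input_tokens, output_tokens) -- illustrative records
    ("marketing", 120_000, 30_000),
    ("support",   400_000, 90_000),
]

costs = {}
for team, tok_in, tok_out in usage:
    cost = (tok_in / 1000) * PRICE_PER_1K["input"] \
         + (tok_out / 1000) * PRICE_PER_1K["output"]
    costs[team] = round(cost, 2)

# marketing: 120 * 0.003 + 30 * 0.015 = 0.36 + 0.45 = 0.81
# support:   400 * 0.003 + 90 * 0.015 = 1.20 + 1.35 = 2.55
```

In practice the usage records would come from per-team tagging of model invocations; the chargeback step itself is just this token-weighted sum.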