GenAI

How to Prepare Your Business for the EU AI Act With KPMG’s EU AI Hub

The EU AI Hub, launched last week by AI security firm Cranium with KPMG and Microsoft, is a service designed to assist businesses in complying with the newly adopted EU AI Act. With expert advice and bespoke technologies, users will be taken through a series of steps to identify what parts of the AI Act apply to their products and what they need to do to comply. On March 13, 2024, the European Union Parliament voted the AI Act into law. This means businesses that offer AI products in the region will soon need to abide by its strict rules…
Read More
AI headphones let wearer listen to a single person in a crowd, by looking at them just once

Noise-canceling headphones have gotten very good at creating an auditory blank slate. But allowing certain sounds from a wearer's environment through the erasure still challenges researchers. The latest edition of Apple's AirPods Pro, for instance, automatically adjusts sound levels for wearers -- sensing when they're in conversation -- but the user has little control over whom to listen to or when this happens. A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to "enroll" them. The system, called "Target Speech…
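The general recipe behind such "target speech hearing" systems can be sketched roughly as follows: a short enrollment clip of the chosen speaker is encoded into an embedding, which then conditions a filter over the noisy mixture so that only that speaker's voice remains. The toy PyTorch sketch below illustrates only this general idea; the module names, sizes, and spectrogram interface are assumptions, not the University of Washington system.

```python
# Hypothetical sketch of target-speaker filtering: an enrollment clip is
# summarized into a speaker embedding, which conditions a mask over the
# noisy mixture. All shapes and module choices are illustrative assumptions.
import torch
import torch.nn as nn

class TargetSpeakerFilter(nn.Module):
    def __init__(self, n_freq=257, emb_dim=128):
        super().__init__()
        # Encoder that summarizes the enrollment clip into a speaker embedding.
        self.enroll_enc = nn.GRU(n_freq, emb_dim, batch_first=True)
        # Mask estimator conditioned on that embedding at every time frame.
        self.mask_net = nn.Sequential(
            nn.Linear(n_freq + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, n_freq), nn.Sigmoid(),
        )

    def forward(self, mixture_spec, enroll_spec):
        # mixture_spec, enroll_spec: (batch, time, n_freq) magnitude spectrograms
        _, h = self.enroll_enc(enroll_spec)              # (1, batch, emb_dim)
        spk_emb = h[-1].unsqueeze(1).expand(-1, mixture_spec.size(1), -1)
        mask = self.mask_net(torch.cat([mixture_spec, spk_emb], dim=-1))
        return mask * mixture_spec                       # keep only the target speaker

model = TargetSpeakerFilter()
mixture = torch.rand(1, 100, 257)    # noisy crowd audio (spectrogram frames)
enrollment = torch.rand(1, 30, 257)  # short clip captured while looking at the speaker
filtered = model(mixture, enrollment)
print(filtered.shape)  # torch.Size([1, 100, 257])
```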
Read More
The third New England RLHF Hackers Hackathon

At the third New England RLHF Hackathon, several interesting projects were showcased, each focusing on different aspects of machine learning and reinforcement learning. Participants and those interested in future events are encouraged to join the Discord community for more information and updates. The highlighted projects include: Pink Elephants Pt 3 (Authors: Sid Verma, Louis Castricato): This project aimed to train a pink elephant model via ILQL (Implicit Language Q-Learning), using the standard trlX implementation. The team faced challenges in finding optimal hyperparameters and proposed future research that includes more nuanced reward shaping and combining different…
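For readers curious what reward shaping for the pink elephant task might look like, here is a hypothetical, self-contained sketch that penalizes mentions of a banned entity and lightly rewards a preferred substitute. It is an illustration of the idea only, not the team's trlX/ILQL code, and all names and constants are made up.

```python
# Hypothetical reward-shaping sketch for the "pink elephant" setup: the model
# should avoid mentioning a banned entity and steer toward an allowed one.
# The banned/preferred strings and weights below are illustrative assumptions.
import re

def pink_elephant_reward(completion: str,
                         banned: str = "pink elephant",
                         preferred: str = "grey rhino") -> float:
    """Return a scalar reward for one completion."""
    text = completion.lower()
    banned_hits = len(re.findall(re.escape(banned), text))
    preferred_hits = len(re.findall(re.escape(preferred), text))
    # Strongly penalize every mention of the banned entity,
    # lightly reward mentions of the preferred substitute.
    return -1.0 * banned_hits + 0.25 * preferred_hits

samples = [
    "I would recommend visiting the grey rhino exhibit.",
    "Have you seen the pink elephant at the zoo?",
]
rewards = [pink_elephant_reward(s) for s in samples]
print(rewards)  # [0.25, -1.0]
```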
Read More

PRECISE Seminar Talk “Evaluating and Calibrating AI Models with Uncertain Ground Truth” • David Stutz

I had the pleasure of presenting our work on evaluating and calibrating AI models with uncertain ground truth at the seminar series of the PRECISE center at the University of Pennsylvania. Besides talking about our recent papers on evaluating AI models in health with uncertain ground truth and on conformal prediction with uncertain ground truth, I also got to learn more about the research at PRECISE through post-doc and student presentations. In this article, I want to share the corresponding slides. Abstract: For safety, AI systems in health undergo thorough evaluations…
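To make "evaluating with uncertain ground truth" concrete, here is a minimal sketch of one simple variant of the idea: each case carries a distribution over plausible labels aggregated from several expert annotations, and plain accuracy is replaced by the expected accuracy under that distribution. This is an illustration only, not the exact aggregation procedure used in the papers.

```python
# Minimal sketch of evaluation under uncertain ground truth: score predictions
# against annotator-derived label distributions instead of single hard labels.
import numpy as np

def expected_accuracy(predictions, label_distributions):
    """predictions: (n,) predicted class ids.
    label_distributions: (n, k) rows summing to 1, the annotator-derived
    probability that each class is the true label."""
    n = len(predictions)
    # Probability mass the annotators place on the predicted class, per case.
    hit_prob = label_distributions[np.arange(n), predictions]
    return hit_prob.mean()

# Three cases, three classes; rows come from hypothetical expert votes.
label_dists = np.array([
    [0.9, 0.1, 0.0],   # experts largely agree on class 0
    [0.4, 0.4, 0.2],   # genuine disagreement
    [0.0, 0.2, 0.8],
])
preds = np.array([0, 1, 2])
print(expected_accuracy(preds, label_dists))  # (0.9 + 0.4 + 0.8) / 3 = 0.7
```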
Read More
Fullstory’s new platform enables harnessing customer behavioral data as a standalone source – SiliconANGLE

Data analytics startup Fullstory Inc. says it wants to enhance the capabilities of artificial intelligence applications by providing them with the behavioral data they need to understand customers’ sentiments properly. To do this, it has announced a new platform called Data Direct, which automates the collection, synchronization and cleaning of structured, AI-ready behavioral data that can be fed into any application. By doing this, it says it can provide companies with more insightful web and mobile sentiment signals, allowing them to adapt the way they engage with customers and, hopefully, enable more positive interactions. Data Direct is said to transform…
Read More
Anthropic’s Generative AI Research Reveals More About How LLMs Affect Security and Bias

Because large language models operate using neuron-like structures that may link many different concepts and modalities together, it can be difficult for AI developers to adjust a model so as to change its behavior. If you don’t know which neurons connect which concepts, you won’t know which neurons to change. On May 21, Anthropic published a remarkably detailed map of the inner workings of the fine-tuned version of its Claude 3 Sonnet model. With this map, the researchers can explore how neuron-like data points, called features, affect a generative AI’s output. Otherwise, people are only able to see the output…
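The mapping technique behind this work is dictionary learning with sparse autoencoders trained on model activations. The toy sketch below shows only the basic idea: an autoencoder reconstructs activations through a wide, sparsely active hidden layer whose units serve as candidate "features". Sizes, penalty weight, and data are illustrative assumptions, not Anthropic's setup.

```python
# Toy sparse-autoencoder sketch: reconstruct activations as a sparse
# combination of learned directions, each of which can be inspected
# (and amplified or suppressed) as a candidate concept.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=64, d_features=512):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))  # sparse feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(1024, 64)  # stand-in for residual-stream activations

for step in range(200):
    recon, feats = sae(acts)
    # Reconstruction loss plus an L1 penalty that encourages sparse features.
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item(), (feats > 0).float().mean().item())  # loss and fraction of active features
```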
Read More
Robots’ and prosthetic hands’ sense of touch could be as fast as humans

Research at Uppsala University and Karolinska Institutet could pave the way for prosthetic hands and robots able to feel touch like a human hand. Their study has been published in the journal Science. The technology could also be used to help restore lost functionality to patients after a stroke. "Our system can determine what type of object it encounters as fast as a blindfolded person, just by feeling it and deciding whether it is a tennis ball or an apple, for example," says Zhibin Zhang, docent at the Department of Electrical Engineering at Uppsala University. He and…
Read More
Diff-in-Means Concept Editing is Worst-Case Optimal

In our recent paper LEACE: Perfect linear concept erasure in closed form, we showed that in order to fully erase the linearly available information about a binary concept in a neural representation, it is both necessary and sufficient to neutralize the span of the difference-in-means direction between the two classes. Even more recently, Sam Marks and Max Tegmark showed that the behavior of transformers can be effectively manipulated by adding vectors in the span of the difference-in-means direction to the residual stream. In this post, we offer a theoretical explanation for these results by showing that interventions on the difference-in-means…
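As a concrete illustration, the sketch below computes the difference-in-means direction between two classes of toy representations and then shows both operations mentioned here: projecting the direction out (erasure) and adding a multiple of it back in (steering). It uses a plain orthogonal projection for simplicity; the full LEACE transform additionally accounts for the covariance of the representations.

```python
# Compute the difference-in-means direction between two classes, then either
# project it out of the representations (erasure) or add a multiple of it
# (steering). Data and dimensions are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Toy representations for the two classes of a binary concept.
X_pos = rng.normal(loc=1.0, size=(200, d))
X_neg = rng.normal(loc=0.0, size=(200, d))

# Difference-in-means direction (unit norm).
delta = X_pos.mean(axis=0) - X_neg.mean(axis=0)
u = delta / np.linalg.norm(delta)

def erase(X, u):
    """Remove the component of each row along u."""
    return X - np.outer(X @ u, u)

def steer(X, u, alpha=2.0):
    """Push representations along u, as in activation addition."""
    return X + alpha * u

X_erased = erase(np.vstack([X_pos, X_neg]), u)
# After erasure, the class means no longer differ along u.
print(np.abs((X_erased[:200].mean(0) - X_erased[200:].mean(0)) @ u))  # ~0
```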
Read More

Vanderbilt Machine Learning Seminar Talk “Conformal Prediction under Ambiguous Ground Truth” • David Stutz

Last week, I presented our work on Monte Carlo conformal prediction — conformal prediction with ambiguous and uncertain ground truth — at the Vanderbilt Machine Learning Seminar Series. In this work, we show how to adapt standard conformal prediction if no unique ground truth labels are available due to disagreement among experts during annotation. In this article, I want to share the slides of my talk. Abstract: Conformal Prediction (CP) allows us to perform rigorous uncertainty quantification by constructing a prediction set $C(X)$ satisfying $\mathbb{P}_{\text{agg}}(Y \in C(X)) \geq 1-\alpha$ for…
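As a reference point for the guarantee in the abstract, here is a minimal sketch of standard split conformal prediction with a softmax-based nonconformity score on synthetic data. The Monte Carlo variant discussed in the talk extends this to ambiguous labels, which the sketch does not attempt to reproduce.

```python
# Standard split conformal prediction on synthetic data, to make the coverage
# statement P(Y in C(X)) >= 1 - alpha concrete. Scores and labels are made up.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_test, k, alpha = 500, 200, 4, 0.1

def synthetic_softmax(labels):
    """Noisy class-probability vectors that tend to favor the true label."""
    logits = rng.normal(size=(len(labels), k))
    logits[np.arange(len(labels)), labels] += 2.0
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

y_cal = rng.integers(0, k, n_cal)
p_cal = synthetic_softmax(y_cal)
y_test = rng.integers(0, k, n_test)
p_test = synthetic_softmax(y_test)

# Nonconformity score: one minus the probability assigned to the true label.
scores = 1.0 - p_cal[np.arange(n_cal), y_cal]
# Conformal quantile with the finite-sample correction.
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal, method="higher")

# Prediction sets: all classes whose score does not exceed the quantile.
sets = (1.0 - p_test) <= q
coverage = sets[np.arange(n_test), y_test].mean()
print(f"empirical coverage: {coverage:.3f} (target {1 - alpha})")
```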
Read More
Datorios enhances data streaming visibility to support more reliable real-time AI systems – SiliconANGLE

Streaming data observability startup Datorios Ltd. today announced the immediate availability of a new real-time observability platform for the big-data processing framework Apache Flink. With the new platform, companies will benefit from what the startup claims are previously unseen insights relating to streaming data processing. These insights can aid in the creation of new, real-time artificial intelligence systems that can be fully audited to ensure they don’t misbehave, the startup said. Datorios’ founders say they have applied years of experience in the research and development of real-time military intelligence systems to create their new product, with the end goal being…
Read More