Adobe introduces AI-powered eraser to Lightroom

Say goodbye to photobombs. Adobe is introducing an AI-driven Generative Remove feature to its Lightroom photo editor. This feature simplifies the removal of unwanted elements like that annoying person in the background. Currently in public beta, it works seamlessly across the Lightroom ecosystem on mobile, desktop, and web platforms.

Streamlined editing with Firefly AI

Lightroom's Generative Remove effortlessly replaces unwanted elements using Adobe's Firefly AI engine. Paint over the area you want to remove, and Lightroom sends this information to Adobe's Firefly servers, which process the data and return the edited image. In contrast to Adobe Photoshop's Reference Image feature, which allows…
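
For readers curious about the mechanics, below is a minimal, hypothetical sketch of the mask-then-inpaint round trip the excerpt describes: the client uploads the original image plus a painted mask, and a generative backend returns the edited image. The endpoint and field names are illustrative assumptions, not Adobe's actual Firefly API.

```python
# Hypothetical mask-then-inpaint request flow (illustrative only; not Adobe's API).
import requests

def remove_object(image_path: str, mask_path: str, endpoint: str) -> bytes:
    """Send an image and a mask of the region to remove; return the edited image bytes."""
    with open(image_path, "rb") as img, open(mask_path, "rb") as mask:
        resp = requests.post(
            endpoint,
            files={"image": img, "mask": mask},  # mask marks the pixels to replace
        )
    resp.raise_for_status()
    return resp.content  # edited image with the masked region generatively filled

# Example usage (placeholder endpoint):
# edited = remove_object("photo.jpg", "mask.png", "https://example.com/inpaint")
```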
Read More
OpenAI secures key partnership with Reddit

OpenAI has secured a deal to access real-time content from Reddit through the platform’s data API. This allows OpenAI to incorporate conversations from Reddit into ChatGPT and other new products, echoing a previous agreement the platform struck with Google, reportedly valued at $60 million. The partnership lets OpenAI better sample the datasets on which its models are trained, making its AI systems more precise and context-aware. For natural language processing, this means models like ChatGPT can stay continually updated with one of the largest collections of public discourse available, enabling them to respond more effectively. As part of this collaboration, Reddit will be…
Read More
What ScarJo v. ChatGPT Could Look Like in Court

It doesn’t matter whether a person’s actual voice is used in an imitation or not, Rothman says, only whether that audio confuses listeners. In the legal system, there is a big difference between imitation and simply recording something “in the style” of someone else. “No one owns a style,” she says.

Other legal experts don’t see what OpenAI did as a clear-cut impersonation. “I think that any potential ‘right of publicity’ claim from Scarlett Johansson against OpenAI would be fairly weak given the only superficial similarity between the ‘Sky’ actress' voice and Johansson, under the relevant case law,” Colorado law professor…
Read More
Data Machina #252

Diffusion, FM & Pre-Trained AI models for Time-Series. DeepNN-based models are starting to match or even outperform statistical time-series analysis & forecasting methods in some scenarios. Yet, DeepNN-based models for time-series suffer from 4 key issues: 1) complex architectures, 2) the enormous amount of time required for training, 3) high inference costs, and 4) poor context sensitivity.

Latest innovative approaches. To address those issues, a new breed of foundation or pre-trained AI models for time-series is emerging. Some of these new AI models use hybrid approaches borrowing from NLP, vision/image, or physics modelling, such as transformers, diffusion models, KANs and state space…
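
As a rough illustration of the workflow these pre-trained models aim for, here is a minimal sketch of zero-shot forecasting: hand a raw history to a foundation time-series model and get a forecast back with no task-specific training. `PretrainedForecaster` is a hypothetical stand-in (with a naive seasonal fallback), not any particular library's API.

```python
# Minimal sketch of the zero-shot, pretrained-model workflow for time series.
# `PretrainedForecaster` is hypothetical; a real foundation model would load
# pretrained weights instead of using the naive seasonal rule below.
import numpy as np

class PretrainedForecaster:
    """Hypothetical foundation-model wrapper; placeholder logic only."""
    def predict(self, history: np.ndarray, horizon: int, season: int = 12) -> np.ndarray:
        # Placeholder: repeat the last observed season to cover `horizon` steps.
        last_season = history[-season:]
        return np.resize(last_season, horizon)

# Synthetic monthly-style series: seasonal signal plus noise.
history = np.sin(np.arange(120) * 2 * np.pi / 12) + np.random.normal(0, 0.1, 120)
forecast = PretrainedForecaster().predict(history, horizon=24)
print(forecast.shape)  # (24,)
```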
Read More
LLM Task-Specific Evals that Do & Don’t Work

If you’ve run off-the-shelf evals for your tasks, you may have found that most don’t work. They barely correlate with application-specific performance and aren’t discriminative enough to use in production. As a result, we could spend weeks and still not have evals that reliably measure how we’re doing on our tasks. To save us some time, I’m sharing some evals I’ve found useful. The goal is to spend less time figuring out evals so we can spend more time shipping to users. We’ll focus on simple, common tasks like classification/extraction, summarization, and translation. (Although classification evals are basic, having a…
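
As a rough illustration of what a task-specific eval can look like, here is a minimal classification-eval sketch that compares parsed model outputs against gold labels with standard metrics; the label set and data are made up for illustration.

```python
# Minimal task-specific classification eval: score parsed LLM outputs against
# human-annotated gold labels instead of relying on a generic benchmark.
from sklearn.metrics import precision_score, recall_score, confusion_matrix

gold = ["spam", "ham", "spam", "ham", "spam"]   # human-annotated labels (illustrative)
pred = ["spam", "spam", "spam", "ham", "ham"]   # labels parsed from LLM outputs

print("precision:", precision_score(gold, pred, pos_label="spam"))
print("recall:   ", recall_score(gold, pred, pos_label="spam"))
print(confusion_matrix(gold, pred, labels=["spam", "ham"]))
```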
Read More
Tips for LLM Pretraining and Evaluating Reward Models

It's another month in AI research, and it's hard to pick favorites.

Besides new research, there have also been many other significant announcements. Among them, xAI has open-sourced its Grok-1 model, which, at 314 billion parameters, is the largest open-source model yet. Additionally, reports suggest that Claude-3 is approaching or even exceeding the performance of GPT-4. Then there’s also Open-Sora 1.0 (a fully open-source project for video generation), Eagle 7B (a new RWKV-based model), Mosaic’s 132-billion-parameter DBRX (a mixture-of-experts model), and AI21's Jamba (a Mamba-based SSM-transformer model).

However, since detailed information about these models is quite scarce, I'll focus on…
Read More
Microsoft CEO Bashes Human-Like AI After OpenAI’s Scarlett Johansson Scandal

"I don’t need any artificial intelligence."I, RobotAfter OpenAI got in trouble for copying actor Scarlett Johansson's voice for a new ChatGPT voice assistant, the head honcho at Microsoft — a major investor and close partner of OpenAI — bashed human-like AIs in a surprising interview on Monday."I don't like anthropomorphizing AI," Microsoft CEO Satya Nadella told Bloomberg Television. "I sort of believe it's a tool.""It has got intelligence, if you want to give it that moniker, but it’s not the same intelligence that I have," he added, while also dinging the term "artificial intelligence.""I think one of the most unfortunate…
Read More
Yi-34B, Llama 2, and common practices in LLM training: a fact check of the New York Times

On February 21, 2024, the New York Times published “China’s Rush to Dominate A.I. Comes With a Twist: It Depends on U.S. Technology.” The authors claim that Yi-34B, a recent large language model by the Chinese startup 01.AI, is fundamentally indebted to Meta’s Llama 2:

“There was just one twist: Some of the technology in 01.AI’s system came from Llama. Mr. Lee’s start-up then built on Meta’s technology, training its system with new data to make it more powerful.”

This assessment is based on a misreading of the cited Hugging Face issue. While we make no claims about the overall…
Read More
The Role of Synthetic Data in Cybersecurity

Data's value is something of a double-edged sword. On one hand, digital data lays the groundwork for powerful AI applications, many of which could change the world for the better. Conversely, storing so many details on people creates huge privacy risks. Synthetic data provides a possible solution.

What Is Synthetic Data?

Synthetic data is a subset of anonymized data – data that doesn't reveal any real-world details. More specifically, it refers to information that looks and acts like real-world data but has no ties to actual people, places or events. In short, it's fake data that can produce real results. In…
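
As a rough illustration of the idea, here is a minimal sketch that fits simple marginal statistics on a made-up "real" table and samples fresh rows from those distributions, so no synthetic row maps back to an actual person. The column names and distributions are illustrative assumptions.

```python
# Minimal sketch of synthetic tabular data: mimic the statistical shape of the
# real records without copying any individual row.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Stand-in for a real dataset (illustrative columns and distributions).
real = pd.DataFrame({
    "age": rng.integers(18, 80, size=1_000),
    "monthly_logins": rng.poisson(12, size=1_000),
})

# Fit simple marginal statistics on the real data, then sample new rows from
# those fitted distributions; the synthetic rows carry no real identities.
synthetic = pd.DataFrame({
    "age": rng.normal(real["age"].mean(), real["age"].std(), size=1_000).round().clip(18, 80),
    "monthly_logins": rng.poisson(real["monthly_logins"].mean(), size=1_000),
})

print(synthetic.describe())
```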
Read More