Viral News

State Combinatorial Generalization In Decision Making With Conditional Diffusion Models

Read More
Truncated Consistency Models

[Submitted on 18 Oct 2024 (v1), last revised 23 Jan 2025 (this version, v2)] By Sangyun Lee and 6 other authors. Abstract: Consistency models have recently been introduced to accelerate sampling from diffusion models by directly predicting the solution (i.e., data) of the probability flow ODE (PF ODE) from initial noise. However, training consistency models requires learning to map all intermediate points along PF ODE trajectories to their corresponding endpoints. This task is much more challenging than the ultimate objective of one-step generation, which only…
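
To make the training setup concrete, here is a minimal, hypothetical sketch of a consistency-training step in PyTorch. The networks, noise schedule, and loss weighting are illustrative assumptions, not the paper's implementation; the point is only that every intermediate point x_t along a trajectory is regressed toward the same endpoint via a self-consistency target.

```python
import torch

def consistency_training_step(f_theta, f_ema, x0, t, s, sigma):
    """One illustrative consistency-training step (not the paper's code).

    f_theta : online network f(x_t, t) -> predicted data endpoint
    f_ema   : EMA copy of f_theta, used as the self-consistency target
    x0      : clean data batch
    t, s    : adjacent time steps along the PF ODE, with s < t
    sigma   : noise schedule; sigma(t) gives the noise scale at time t
    """
    noise = torch.randn_like(x0)
    x_t = x0 + sigma(t) * noise          # point on the trajectory at time t
    x_s = x0 + sigma(s) * noise          # nearby point at the smaller time s

    pred = f_theta(x_t, t)               # online prediction of the endpoint
    with torch.no_grad():
        target = f_ema(x_s, s)           # target from the EMA model at s

    # Enforce that both trajectory points map to the same endpoint (self-consistency).
    return torch.mean((pred - target) ** 2)
```
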
Read More
Automatic Fact-Checking with Frame-Semantics

Read More
First Lessons Learned of an Artificial Intelligence Robotic System for Autonomous Coarse Waste Recycling Using Multispectral Imaging-Based Methods

arXiv:2501.13855v1 Announce Type: cross Abstract: Current disposal facilities for coarse-grained waste perform manual sorting of materials with heavy machinery. Large quantities of recyclable materials are lost to coarse waste, so more effective sorting processes must be developed to recover them. Two key aspects to automate the sorting process are object detection with material classification in mixed piles of waste, and autonomous control of hydraulic machinery. Because most objects in those accumulations of waste are damaged or destroyed, object detection alone is not feasible in the majority of cases. To address these challenges, we propose a classification of materials with multispectral…
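
As a rough illustration of the multispectral idea (not the authors' pipeline), material classification can operate on per-pixel spectral signatures rather than object shape, so it can still work when objects are damaged or destroyed. The band count, class list, and classifier below are assumptions chosen only for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical setup: a multispectral cube of shape (H, W, B) with B spectral
# bands, and a per-pixel material label map of shape (H, W).
H, W, B = 64, 64, 8
cube = np.random.rand(H, W, B)                 # stand-in for real sensor data
labels = np.random.randint(0, 4, size=(H, W))  # e.g. wood / metal / plastic / other

# Flatten to (num_pixels, num_bands): each pixel's spectrum is one sample,
# so classification does not depend on intact object geometry.
X = cube.reshape(-1, B)
y = labels.reshape(-1)

clf = RandomForestClassifier(n_estimators=50).fit(X, y)
material_map = clf.predict(X).reshape(H, W)    # per-pixel material prediction
```
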
Read More
MSF: Efficient Diffusion Model Via Multi-Scale Latent Factorize

arXiv:2501.13349v1 Announce Type: new Abstract: Diffusion-based generative models have achieved remarkable progress in visual content generation. However, traditional diffusion models directly denoise the entire image from noisy inputs, disregarding the hierarchical structure present in visual signals. This method is computationally intensive, especially for high-resolution image generation. Signal processing often leverages hierarchical decompositions; for instance, Fourier analysis decomposes signals by frequency, while wavelet analysis captures localized frequency components, reflecting both spatial and frequency information simultaneously. Inspired by these principles, we propose a multiscale diffusion framework that generates hierarchical visual representations, which are subsequently integrated to form the final output. The diffusion…
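
A minimal sketch of the hierarchical idea, assuming a simple two-level coarse/detail split in the spirit of a Laplacian pyramid; the level count, resampling, and recombination below are illustrative assumptions, not the MSF architecture.

```python
import torch
import torch.nn.functional as F

def decompose(x):
    """Split an image batch into a coarse half-resolution component and a
    residual detail component (illustrative two-level decomposition)."""
    coarse = F.avg_pool2d(x, kernel_size=2)
    upsampled = F.interpolate(coarse, scale_factor=2, mode="bilinear", align_corners=False)
    detail = x - upsampled
    return coarse, detail

def recompose(coarse, detail):
    """Invert the decomposition: upsample the coarse level and add back the detail."""
    upsampled = F.interpolate(coarse, scale_factor=2, mode="bilinear", align_corners=False)
    return upsampled + detail

x = torch.randn(2, 3, 64, 64)          # stand-in image batch
coarse, detail = decompose(x)          # each level could be denoised by its own model
assert torch.allclose(recompose(coarse, detail), x, atol=1e-5)
```
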
Read More
Evaluating LLMs for Quotation Attribution in Literary Texts: A Case Study of LLaMa3

[Submitted on 17 Jun 2024 (v1), last revised 23 Jan 2025 (this version, v2)] By Gaspard Michel, Elena V. Epure, Romain Hennequin, and Christophe Cerisara. Abstract: Large Language Models (LLMs) have shown promising results in a variety of literary tasks, often using complex memorized details of narration and fictional characters. In this work, we evaluate the ability of Llama-3 to attribute utterances of direct speech to their speakers in novels. The LLM shows impressive results on…
Read More
AirRadar: Inferring Nationwide Air Quality in China with Deep Neural Networks

Read More
Accelerate High-Quality Diffusion Models with Inner Loop Feedback

[Submitted on 22 Jan 2025 (v1), last revised 23 Jan 2025 (this version, v2)] By Matthew Gwilliam and 4 other authors. Abstract: We propose Inner Loop Feedback (ILF), a novel approach to accelerating diffusion model inference. ILF trains a lightweight module to predict future features in the denoising process by leveraging the outputs of a chosen diffusion backbone block at a given time step. This approach exploits two key intuitions: (1) the outputs of a given block at adjacent time steps are…
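
A toy sketch of the feature-prediction idea, under the assumption that the feedback module is a small network mapping a block's features at step t to the features expected at a later denoising step; the module size, conditioning, and how the prediction is consumed during sampling are assumptions, not the ILF design.

```python
import torch
import torch.nn as nn

class FeedbackModule(nn.Module):
    """Lightweight predictor: given a backbone block's features at step t,
    estimate that block's features at a future denoising step (illustrative)."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )

    def forward(self, block_features, t):
        # Condition on the current time step so the module knows how far ahead to look.
        t_embed = t.float().unsqueeze(-1)
        return self.net(torch.cat([block_features, t_embed], dim=-1))

# Hypothetical training target: match the features the backbone would produce
# at the future step, so inference can reuse the prediction instead of recomputing.
dim = 512
module = FeedbackModule(dim)
feat_t = torch.randn(4, dim)           # block output at step t
feat_future = torch.randn(4, dim)      # block output at a later step (stand-in)
t = torch.full((4,), 800)
loss = torch.mean((module(feat_t, t) - feat_future) ** 2)
```
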
Read More
RAG-Reward: Optimizing RAG with Reward Modeling and RLHF

Read More
S-LoRA: Scalable Low-Rank Adaptation for Class Incremental Learning

arXiv:2501.13198v1 Announce Type: new Abstract: Continual Learning (CL) with foundation models has recently emerged as a promising approach to harnessing the power of pre-trained models for sequential tasks. Existing prompt-based methods generally use a gating mechanism to select relevant prompts aligned with the test query for further processing. However, the success of these methods largely depends on the precision of the gating mechanism, which becomes less scalable and incurs additional computational overhead as the number of tasks increases. To overcome these issues, we propose a Scalable Low-Rank Adaptation (S-LoRA) method for CL (in particular class incremental learning), which incrementally decouples the learning of the…
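
For context on the building block, here is a minimal LoRA-style adapter around a frozen linear layer; the rank, scaling, and the idea of keeping one adapter per incrementally learned task are assumptions made for illustration, not the S-LoRA method itself.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (illustrative)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # pre-trained weights stay fixed
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # Output = frozen projection + scaled low-rank correction.
        return self.base(x) + (x @ self.A @ self.B) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))                     # only A and B receive gradients
```
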
Read More