Viral News

On the Convergence of Multi-objective Optimization under Generalized Smoothness

arXiv:2405.19440v1 Announce Type: new Abstract: Multi-objective optimization (MOO) is receiving increasing attention in fields such as multi-task learning. Recent works provide effective algorithms with theoretical analysis, but they are limited by the standard $L$-smoothness or bounded-gradient assumptions, which are typically violated by neural networks such as recurrent neural networks (RNNs) and transformers. In this paper, we study a more general and realistic class of $\ell$-smooth loss functions, where $\ell$ is a general non-decreasing function of the gradient norm. We develop two novel single-loop algorithms for $\ell$-smooth MOO problems, Generalized Smooth Multi-objective Gradient descent (GSMGrad) and its stochastic variant, Stochastic…
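For readers unfamiliar with the condition, a common first-order form of generalized ($\ell$-)smoothness reads as follows (a sketch for context; the paper's exact definition may differ in constants or locality conditions):

```latex
% A differentiable objective f_i is \ell-smooth if, for a non-decreasing
% function \ell and all x, y sufficiently close,
\|\nabla f_i(x) - \nabla f_i(y)\|
  \;\le\; \ell\bigl(\|\nabla f_i(x)\|\bigr)\,\|x - y\|.
% Standard L-smoothness is the special case \ell(u) \equiv L, and
% (L_0, L_1)-smoothness corresponds to \ell(u) = L_0 + L_1 u.
```

Because $\ell$ may grow with the gradient norm, this class admits the sharp, exploding-gradient landscapes seen in RNNs and transformers that plain $L$-smoothness rules out.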
Read More
Diffusion Policy Attacker: Crafting Adversarial Attacks for Diffusion-based Policies

arXiv:2405.19424v1 Announce Type: new Abstract: Diffusion models (DMs) have emerged as a promising approach for behavior cloning (BC). Diffusion policies (DP) based on DMs have elevated BC performance to new heights, demonstrating robust efficacy across diverse tasks, coupled with their inherent flexibility and ease of implementation. Despite the increasing adoption of DP as a foundation for policy generation, the critical issue of safety remains largely unexplored. While previous attacks have targeted deep policy networks, DP uses a diffusion model as its policy network, and its chained denoising structure and injected randomness render those earlier attack methods ineffective. In…
Read More
Critical Learning Periods: Leveraging Early Training Dynamics for Efficient Data Pruning

[Submitted on 29 May 2024] View a PDF of the paper titled Critical Learning Periods: Leveraging Early Training Dynamics for Efficient Data Pruning, by Everlyn Asiko Chimoto and 5 other authors View PDF HTML (experimental) Abstract:Neural Machine Translation models are extremely data- and compute-hungry. However, not all data points contribute equally to model training and generalization. Data pruning to remove low-value data points has the benefit of drastically reducing the compute budget without a significant drop in model performance. In this paper, we propose a new data pruning technique, Checkpoints Across Time (CAT), that leverages early model training dynamics to…
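The general idea of pruning by early training dynamics can be sketched in a few lines. This is an illustration only: the actual CAT scoring metric is defined in the paper, and the assumption here, that examples whose loss changes least across early checkpoints are low-value, is ours for the sketch.

```python
# Sketch of pruning by early-training loss dynamics (illustrative only;
# the real Checkpoints Across Time metric is defined in the paper).
# Assumption: examples whose loss varies least across early checkpoints
# are treated as low-value and pruned first.

def prune_by_dynamics(losses_per_checkpoint, keep_fraction):
    """losses_per_checkpoint: one inner list per early checkpoint,
    each giving the per-example loss at that checkpoint.
    Returns the sorted indices of the examples to keep."""
    n = len(losses_per_checkpoint[0])
    # Score each example by the spread of its loss across checkpoints.
    scores = []
    for i in range(n):
        vals = [ckpt[i] for ckpt in losses_per_checkpoint]
        scores.append(max(vals) - min(vals))
    # Keep the examples whose loss moved the most ("high-value").
    k = max(1, int(n * keep_fraction))
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

# Example: 4 training examples, losses at 3 early checkpoints.
losses = [
    [2.0, 1.9, 3.0, 0.5],  # checkpoint 1
    [1.8, 1.9, 2.0, 0.5],  # checkpoint 2
    [1.5, 1.9, 1.0, 0.5],  # checkpoint 3
]
print(prune_by_dynamics(losses, 0.5))  # → [0, 2]
```

Examples 1 and 3 have flat losses across checkpoints and are pruned; the halved dataset keeps only the examples the model was still actively learning from.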
Read More
How Delta Sharing Enables Secure End-to-End Collaboration | Databricks Blog

In today's digital landscape, secure data sharing is critical to operational efficiency and innovation. Databricks and the Linux Foundation developed Delta Sharing as the first open source approach to data sharing across data, analytics and AI. Databricks provides secure data exchange, facilitating seamless sharing across platforms, clouds and regions. Enterprises of all sizes trust Delta Sharing, which supports a broad spectrum of applications and diverse data formats. This flexibility makes it a reliable tool for organizations seeking to harness the full potential of their data assets. In this blog, we will review Delta Sharing's security architecture through three different sharing scenarios…
Read More
Using Contrastive Learning with Generative Similarity to Learn Spaces that Capture Human Inductive Biases

Read More
Evaluating Vision-Language Models on Bistable Images

[Submitted on 29 May 2024] View a PDF of the paper titled Evaluating Vision-Language Models on Bistable Images, by Artemis Panagopoulou and 2 other authors View PDF HTML (experimental) Abstract:Bistable images, also known as ambiguous or reversible images, present visual stimuli that can be seen in two distinct interpretations, though not simultaneously by the observer. In this study, we conduct the most extensive examination of vision-language models using bistable images to date. We manually gathered a dataset of 29 bistable images, along with their associated labels, and subjected them to 116 different manipulations in brightness, tint, and rotation. We evaluated…
Read More
Beyond Agreement: Diagnosing the Rationale Alignment of Automated Essay Scoring Methods based on Linguistically-informed Counterfactuals

arXiv:2405.19433v1 Announce Type: new Abstract: While current automated essay scoring (AES) methods show high agreement with human raters, their scoring mechanisms are not fully explored. Our proposed method, using counterfactual intervention assisted by Large Language Models (LLMs), reveals that when scoring essays, BERT-like models primarily focus on sentence-level features, while LLMs are attuned to conventions, language complexity, and organization, indicating a more comprehensive alignment with scoring rubrics. Moreover, LLMs can discern counterfactual interventions during feedback. Our approach improves understanding of neural AES methods and can also be applied to other domains seeking transparency in model-driven decisions. The codes and…
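The counterfactual-intervention idea can be illustrated with a toy probe. This is a conceptual sketch, not the paper's method: the paper's interventions are LLM-assisted and linguistically informed, while here we use a crude organization perturbation (sentence shuffling) and a hypothetical stand-in scorer.

```python
# Illustrative counterfactual probe for an essay scorer (conceptual sketch;
# the paper's LLM-assisted, linguistically-informed interventions are richer).
import random

def shuffle_sentences(essay, seed=0):
    """Counterfactual that degrades organization while keeping content:
    randomly reorder the essay's sentences."""
    sents = [s.strip() for s in essay.split(".") if s.strip()]
    rng = random.Random(seed)
    rng.shuffle(sents)
    return ". ".join(sents) + "."

def rationale_gap(score_fn, essay, intervention):
    """How much the score moves under a targeted counterfactual.
    A scorer aligned with the perturbed rubric dimension should show a
    nonzero gap; an insensitive one shows ~0."""
    return score_fn(essay) - score_fn(intervention(essay))

# Hypothetical stand-in for a real AES model: rewards essay length only,
# so it is blind to organization by construction.
length_scorer = lambda e: len(e.split())

essay = "First we state the claim. Then we give evidence. Finally we conclude."
print(rationale_gap(length_scorer, essay, shuffle_sentences))  # → 0
```

A zero gap under an organization intervention is evidence the scorer's rationale ignores organization, which is the kind of diagnosis the paper automates at scale.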
Read More
The Past, Present, and Future of Data Quality Management

Data quality monitoring. Data testing. Data observability. Say that five times fast. Are they different words for the same thing? Unique approaches to the same problem? Something else entirely? And more importantly: do you really need all three? Like everything in data engineering, data quality management is evolving at lightning speed. The meteoric rise of data and AI in the enterprise has made data quality a zero-day risk for modern businesses, and THE problem to solve for data teams. With so much overlapping terminology, it's not always clear how it all fits together, or if it fits together. But contrary to what…
Read More
Announcing the General Availability of Row and Column Level Security with Databricks Unity Catalog

We are excited to announce the general availability of Row Filters and Column Masks in Unity Catalog on AWS, Azure and GCP! Managing fine-grained access controls on rows and columns in tables is critical to ensure data security and meet compliance. With Unity Catalog, you can use standard SQL functions to define row filters and column masks, allowing fine-grained access controls on rows and columns. Row Filters let you control which subsets of your tables' rows are visible to hierarchies of groups and users within your organization. Column Masks let you redact your table values based on the same dimensions.…
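The semantics of row filters and column masks can be shown with a small conceptual model. This is plain Python, not the Unity Catalog API (in Databricks both policies are defined as SQL functions and attached to tables); the policy functions and data below are hypothetical.

```python
# Conceptual model of row filters and column masks (illustrative only;
# Unity Catalog expresses these as SQL functions attached to tables).

def apply_policies(rows, row_filter, column_masks, user):
    """Return only the rows the user may see, with masked columns redacted.
    row_filter(user, row) -> bool; column_masks: {col: mask(user, value)}."""
    out = []
    for row in rows:
        if not row_filter(user, row):
            continue  # row filter: hide the whole row
        masked = dict(row)
        for col, mask in column_masks.items():
            masked[col] = mask(user, masked[col])  # column mask: redact value
        out.append(masked)
    return out

# Hypothetical policies: the "hr" group sees all regions and raw SSNs;
# everyone else sees only their own region and a redacted SSN.
rows = [
    {"region": "US", "name": "Ava", "ssn": "123-45-6789"},
    {"region": "EU", "name": "Ben", "ssn": "987-65-4321"},
]
row_filter = lambda user, r: user["group"] == "hr" or r["region"] == user["region"]
masks = {"ssn": lambda user, v: v if user["group"] == "hr" else "***-**-" + v[-4:]}

print(apply_policies(rows, row_filter, masks, {"group": "analyst", "region": "EU"}))
# → [{'region': 'EU', 'name': 'Ben', 'ssn': '***-**-4321'}]
```

The key design point this mirrors is that both policies key off the caller's identity and group membership, so the same table presents different contents to different users without copies.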
Read More
Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning

[Submitted on 29 May 2024] View a PDF of the paper titled Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning, by Alexander Politowicz and 2 other authors View PDF HTML (experimental) Abstract:Designing Reinforcement Learning (RL) solutions for real-life problems remains a significant challenge. A major area of concern is safety. "Shielding" is a popular technique to enforce safety in RL by turning user-defined safety specifications into safe agent behavior. However, these methods either suffer from extreme learning delays, demand extensive human effort in designing models and safe domains in the problem, or require pre-computation. In this paper,…
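The basic shielding mechanism the abstract refers to can be sketched briefly. This is a conceptual minimum, not the paper's construction: we assume a permissibility predicate and a known safe fallback action are given, which is exactly the kind of specification the paper aims to obtain cheaply.

```python
# Minimal sketch of a runtime "shield" for RL (conceptual; the paper's
# shield construction is more involved). Assumptions: a permissibility
# predicate and a safe fallback action exist for every state.

def shielded_action(state, proposed_action, is_permissible, fallback):
    """Pass the agent's action through if permissible, otherwise
    substitute the safe fallback, so unsafe actions never reach
    the environment during learning or deployment."""
    if is_permissible(state, proposed_action):
        return proposed_action
    return fallback(state)

# Toy 1-D example: moving right from position 3 falls off a ledge.
is_permissible = lambda s, a: not (s == 3 and a == "right")
fallback = lambda s: "stay"

print(shielded_action(2, "right", is_permissible, fallback))  # → right
print(shielded_action(3, "right", is_permissible, fallback))  # → stay
```

Because the shield intervenes only at action-selection time, the learner can explore freely everywhere the predicate permits, which is what makes shielding attractive relative to penalizing unsafe behavior after the fact.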
Read More