Viral News

Evaluating the Adversarial Robustness of Retrieval-Based In-Context Learning for Large Language Models

arXiv:2405.15984v1 Announce Type: new Abstract: With the emergence of large language models, such as LLaMA and OpenAI GPT-3, In-Context Learning (ICL) has gained significant attention due to its effectiveness and efficiency. However, ICL is very sensitive to the choice, order, and verbaliser used to encode the demonstrations in the prompt. Retrieval-Augmented ICL methods try to address this problem by leveraging retrievers to extract semantically related examples as demonstrations. While this approach yields more accurate results, its robustness against various types of adversarial attacks, including perturbations on test samples, demonstrations, and retrieved data, remains under-explored. Our study reveals that retrieval-augmented models can…
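The retrieval step itself is straightforward to sketch. Below is a minimal, hypothetical illustration (not the paper's implementation) of retrieval-augmented demonstration selection: the test input is embedded, the k most similar labelled examples are pulled from a pool, and a prompt is assembled with a simple verbaliser. The `embed` function and the demonstration pool are stand-ins for any real sentence encoder and dataset.

```python
# Minimal sketch of retrieval-augmented in-context learning (not the paper's
# implementation): pick the k demonstrations most similar to the test input
# by embedding cosine similarity, then format them into a prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder sentence encoder; swap in any real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def retrieve_demonstrations(query: str, pool: list[tuple[str, str]], k: int = 4):
    """Return the k (input, label) pairs whose inputs are closest to the query."""
    q = embed(query)
    scored = sorted(pool, key=lambda ex: -float(embed(ex[0]) @ q))
    return scored[:k]

def build_icl_prompt(query: str, pool: list[tuple[str, str]], verbaliser=str) -> str:
    """Assemble retrieved demonstrations plus the test input into one prompt."""
    demos = retrieve_demonstrations(query, pool)
    lines = [f"Input: {x}\nLabel: {verbaliser(y)}" for x, y in demos]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)
```

Each of the attack surfaces named in the abstract maps onto this sketch: perturb the query, poison the demonstration pool, or tamper with the retrieved examples before they reach the prompt.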
Read More
Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications

arXiv:2405.15877v1 Announce Type: new Abstract: Large language models (LLMs) significantly enhance the performance of various applications, but they are computationally intensive and energy-demanding. This makes it challenging to deploy them on devices with limited resources, such as personal computers and mobile/wearable devices, and results in substantial inference costs in resource-rich environments like cloud servers. To extend the use of LLMs, we introduce a low-rank decomposition approach to effectively compress these models, tailored to the requirements of specific applications. We observe that LLMs pretrained on general datasets contain many redundant components not needed for particular applications. Our method focuses on identifying…
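As a rough illustration of the underlying idea (not the paper's basis-selection criterion), a weight matrix can be replaced by two thin factors obtained from a truncated SVD. The rank cut-off below simply keeps a target fraction of spectral energy, which is only a stand-in for the application-aware selection the paper proposes.

```python
# Toy illustration of low-rank weight compression via truncated SVD.
# Rank is chosen to keep a target fraction of squared singular-value energy,
# a placeholder criterion rather than the paper's basis-selection method.
import numpy as np

def low_rank_factorize(W: np.ndarray, energy: float = 0.95):
    """Factor W (d_out x d_in) into A @ B with the smallest rank that keeps
    the requested fraction of squared singular-value energy."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(S**2) / np.sum(S**2)
    r = int(np.searchsorted(cum, energy)) + 1
    A = U[:, :r] * S[:r]          # d_out x r
    B = Vt[:r, :]                 # r x d_in
    return A, B

W = np.random.randn(512, 2048)
A, B = low_rank_factorize(W, energy=0.90)
print(A.shape, B.shape, np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```

Replacing a d_out x d_in layer with rank-r factors saves parameters whenever r(d_out + d_in) < d_out * d_in, which is what makes the decomposition attractive on resource-limited devices.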
Read More
Efficient Point Transformer with Dynamic Token Aggregating for Point Cloud Processing

Read More
A hierarchical Bayesian model for syntactic priming

arXiv:2405.15964v1 Announce Type: new Abstract: The effect of syntactic priming exhibits three well-documented empirical properties: the lexical boost, the inverse frequency effect, and the asymmetrical decay. We aim to show how these three empirical phenomena can be reconciled in a general learning framework, the hierarchical Bayesian model (HBM). The model represents syntactic knowledge in a hierarchical structure of syntactic statistics, where a lower level represents the verb-specific biases of syntactic decisions, and a higher level represents the abstract bias as an aggregation of verb-specific biases. This knowledge is updated in response to experience by Bayesian inference. In simulations, we show…
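A much cruder two-level sketch than the paper's HBM can still convey the structure: verb-specific biases are Beta posteriors whose prior pseudo-counts come from counts pooled across all verbs (the "abstract" level). The counts and the single update rule below are illustrative only, and the pooled estimate is an empirical-Bayes shortcut rather than full joint inference.

```python
# Toy two-level sketch (not the paper's exact HBM): the abstract bias toward a
# syntactic structure is summarized by pooled pseudo-counts across verbs, and
# each verb-specific bias is a Beta posterior combining that abstract prior
# with the verb's own observed choices. All counts below are made up.
from collections import defaultdict

counts = defaultdict(lambda: [0, 0])   # verb -> [choices of structure A, of structure B]

def observe(verb: str, chose_A: bool):
    counts[verb][0 if chose_A else 1] += 1

def abstract_bias(strength: float = 1.0):
    """Higher level: aggregate counts across verbs into Beta pseudo-counts."""
    a = sum(c[0] for c in counts.values()) * strength + 1.0
    b = sum(c[1] for c in counts.values()) * strength + 1.0
    return a, b

def verb_bias(verb: str) -> float:
    """Lower level: posterior mean of P(structure A | verb), shrunk toward the abstract bias."""
    a0, b0 = abstract_bias()
    a, b = counts[verb]
    return (a0 + a) / (a0 + b0 + a + b)

observe("give", True); observe("give", True); observe("send", False)
print(round(verb_bias("give"), 3), round(verb_bias("hand"), 3))
```

An unseen verb ("hand") falls back to the abstract bias, while a frequently observed verb ("give") is pulled toward its own statistics, which is the qualitative behaviour behind the lexical boost and inverse frequency effect.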
Read More
J.P. Morgan Launches ‘Containerized Data’ Solution in the Cloud

Getting access to consistent, high-quality data ranks as one of the toughest challenges in big data, advanced analytics, and AI. It’s a challenge that is being taken up by Fusion by J.P. Morgan with its new Containerized Data offering, which provides institutional investors with consistent, enriched data that’s been standardized with a common semantic layer. The worst-kept secret in big data is that data prep consumes the vast majority of time in analytics, machine learning, and AI projects. Raw data does contain signals that data scientists so desperately want to leverage for competitive gain, but that data must be…
Read More
5 Ways to Overcome Headwinds in Supply Chain Efficiency

The post-pandemic recovery was a major shock to the supply chain landscape. The emergence of varied and powerful headwinds saw many lingering inefficiencies exposed as firms scrambled to maintain inventory levels against the backdrop of an uneven recovery from the health crisis, geopolitical unrest, environmental concerns, and staffing shortages, to name a few. Legacy processes have been adversely impacted by changing consumer sentiment, and it's becoming increasingly clear that digital transformation is essential in helping businesses at all ends of the chain overcome mounting pressures. Pressure Amid Mounting Headwinds Supply chain issues can be varied. The lockdowns driven by the…
Read More
Social Impact Using Data and AI: Revealing the 2024 Finalists for the Data For Good Award

The annual Data Team Awards celebrate the critical contributions of data teams to various sectors, spotlighting their role in driving progress and positive change within their organizations. This year, we've seen an exceptional response, with more than 200 nominations from around the world, emphasizing the widespread impact of innovation in both data science and artificial intelligence. With the Data + AI Summit around the corner, we are excited to feature the innovators in our six award categories and highlight their remarkable journeys to data-led breakthroughs. The Data for Good Award honors teams that have harnessed the power of data, analytics, and AI…
Read More
CausalConceptTS: Causal Attributions for Time Series Classification using High Fidelity Diffusion Models

Submitted on 24 May 2024, by Juan Miguel Lopez Alcaraz and one other author. Abstract: Despite the excellent performance of machine learning models, understanding their decisions remains a long-standing goal. While commonly used attribution methods in explainable AI attempt to address this issue, they typically rely on associational rather than causal relationships. In this study, within the context of time series classification, we introduce a novel framework to assess the causal effect of concepts,…
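The contrast with associational attributions can be sketched as an interventional comparison: score the classifier on real samples, then on counterfactuals in which the concept region has been re-generated, and average the difference. In the toy code below both the classifier and the generative model are hypothetical placeholders; the paper instead uses high-fidelity diffusion models for the counterfactual step.

```python
# Toy sketch of an interventional concept attribution for a time-series
# classifier (not CausalConceptTS itself): compare the classifier's output on
# factual samples against counterfactual samples in which the concept region
# has been replaced by a generative model, here stubbed out with noise.
import numpy as np

def classifier_prob(x: np.ndarray) -> float:
    """Placeholder classifier: probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-x.mean()))

def sample_counterfactual(x: np.ndarray, concept_mask: np.ndarray) -> np.ndarray:
    """Stand-in for a diffusion model: re-draw the masked concept region."""
    x_cf = x.copy()
    x_cf[concept_mask] = np.random.randn(concept_mask.sum())
    return x_cf

def causal_concept_effect(batch: np.ndarray, concept_mask: np.ndarray, n_cf: int = 8) -> float:
    """Average treatment-style effect of the concept on the classifier output."""
    effects = []
    for x in batch:
        p_factual = classifier_prob(x)
        p_cf = np.mean([classifier_prob(sample_counterfactual(x, concept_mask))
                        for _ in range(n_cf)])
        effects.append(p_factual - p_cf)
    return float(np.mean(effects))

batch = np.random.randn(16, 200)                       # 16 series of length 200
mask = np.zeros(200, dtype=bool); mask[50:80] = True   # the "concept" region
print(causal_concept_effect(batch, mask))
```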
Read More
3D Learnable Supertoken Transformer for LiDAR Point Cloud Scene Segmentation

arXiv:2405.15826v1 Announce Type: new Abstract: 3D Transformers have achieved great success in point cloud understanding and representation. However, there is still considerable scope for further development in effective and efficient Transformers for large-scale LiDAR point cloud scene segmentation. This paper proposes a novel 3D Transformer framework, named 3D Learnable Supertoken Transformer (3DLST). The key contributions are summarized as follows. Firstly, we introduce the first Dynamic Supertoken Optimization (DSO) block for efficient token clustering and aggregating, where the learnable supertoken definition avoids the time-consuming pre-processing of traditional superpoint generation. Since the learnable supertokens can be dynamically optimized by multi-level deep features…
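A minimal sketch of the supertoken idea (not the actual DSO block): a small, fixed number of learnable query vectors each attend over all point tokens and pool a weighted mean of their features, so the grouping is learned end-to-end rather than produced by superpoint pre-processing. The module and dimension names below are assumptions for illustration.

```python
# Minimal sketch of learnable "supertoken" aggregation (not the DSO block from
# 3DLST): learnable supertoken queries compute attention weights over point
# tokens, and each supertoken becomes a weighted mean of point features.
import torch
import torch.nn as nn

class SupertokenAggregator(nn.Module):
    def __init__(self, dim: int, num_supertokens: int = 64):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_supertokens, dim) * dim**-0.5)
        self.proj = nn.Linear(dim, dim)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, dim)
        keys = self.proj(points)                                    # (B, N, D)
        logits = torch.einsum("sd,bnd->bsn", self.queries, keys)    # (B, S, N)
        weights = logits.softmax(dim=-1)                            # attention over points
        supertokens = torch.einsum("bsn,bnd->bsd", weights, points) # (B, S, D)
        return supertokens

x = torch.randn(2, 4096, 128)                   # 2 scenes, 4096 points, 128-d features
print(SupertokenAggregator(dim=128)(x).shape)   # torch.Size([2, 64, 128])
```

Because the queries are ordinary parameters, the token grouping can be optimized jointly with the rest of the network, which is the motivation the abstract gives for avoiding traditional superpoint generation.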
Read More