Viral News

Towards improving Alzheimer’s intervention: a machine learning approach for biomarker detection through combining MEG and MRI pipelines


[Submitted on 9 Aug 2024] By Alwani Liyana Ahmad and 5 other authors. Abstract: MEG is a non-invasive neuroimaging technique with excellent temporal and spatial resolution, crucial for studying brain function in dementia and Alzheimer's disease. It identifies changes in brain activity at various stages of Alzheimer's, including the preclinical and prodromal phases. MEG may detect pathological changes before clinical symptoms appear, offering potential biomarkers for early intervention. This study evaluates classification techniques using MEG features to distinguish between…
Rethinking Multiple Instance Learning: Developing an Instance-Level Classifier via Weakly-Supervised Self-Training


arXiv:2408.04813v1 Announce Type: new Abstract: The multiple instance learning (MIL) problem is currently solved from either a bag-classification or an instance-classification perspective, both of which ignore important information contained in some instances and yield limited performance. For example, existing methods often have difficulty learning hard positive instances. In this paper, we formulate MIL as a semi-supervised instance classification problem, so that all labeled and unlabeled instances can be fully utilized to train a better classifier. The difficulty in this formulation is that all the labeled instances are negative in MIL, and traditional self-training techniques used in semi-supervised learning tend to…
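The bag-classification perspective the abstract contrasts against is often implemented by max-pooling instance scores: a bag is predicted positive if any one instance looks positive. A minimal, self-contained illustration of why this can discard instance-level information (the scores and bags below are invented, not taken from the paper):

```python
# Toy sketch of the bag-classification view of MIL: a bag's label is
# recovered from the maximum instance score, so all non-maximal instances
# (including hard positives) contribute nothing to the bag prediction.

def bag_label_from_instances(instance_scores, threshold=0.5):
    """Bag-level prediction via max-pooling over instance scores."""
    return max(instance_scores) >= threshold

positive_bag = [0.1, 0.2, 0.9]   # one confidently positive instance
negative_bag = [0.1, 0.3, 0.4]   # all instances score below threshold

assert bag_label_from_instances(positive_bag) is True
assert bag_label_from_instances(negative_bag) is False
```

The paper's reformulation instead treats every instance as a (partially labeled) training example, rather than collapsing each bag to its maximum.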
Efficacy of Large Language Models in Systematic Reviews


arXiv:2408.04646v1 Announce Type: new Abstract: This study investigates the effectiveness of Large Language Models (LLMs) in interpreting existing literature through a systematic review of the relationship between Environmental, Social, and Governance (ESG) factors and financial performance. The primary objective is to assess how LLMs can replicate a systematic review on a corpus of ESG-focused papers. We compiled and hand-coded a database of 88 relevant papers published from March 2020 to May 2024. Additionally, we used a set of 238 papers from a previous systematic review of ESG literature from January 2015 to February 2020. We evaluated two current state-of-the-art LLMs,…
On the Geometry of Deep Learning


[Submitted on 9 Aug 2024] By Randall Balestriero and 2 other authors. Abstract: In this paper, we overview one promising avenue of progress at the mathematical foundation of deep learning: the connection between deep networks and function approximation by affine splines (continuous piecewise-linear functions in multiple dimensions). In particular, we overview work over the past decade on understanding certain geometrical properties of a deep network's affine spline mapping, especially how it tessellates its input space. As we will see, the affine…
UniBench: Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling


arXiv:2408.04810v1 Announce Type: new Abstract: Significant research efforts have been made to scale and improve vision-language model (VLM) training approaches. Yet, with an ever-growing number of benchmarks, researchers are tasked with the heavy burden of implementing each protocol, bearing a non-trivial computational cost, and making sense of how all these benchmarks translate into meaningful axes of progress. To facilitate a systematic evaluation of VLM progress, we introduce UniBench: a unified implementation of 50+ VLM benchmarks spanning a comprehensive range of carefully categorized capabilities from object recognition to spatial awareness, counting, and much more. We showcase the utility of UniBench for…
Evaluating the Impact of Advanced LLM Techniques on AI-Lecture Tutors for a Robotics Course


arXiv:2408.04645v1 Announce Type: new Abstract: This study evaluates the performance of Large Language Models (LLMs) as an Artificial Intelligence-based tutor for a university course. In particular, different advanced techniques are utilized, such as prompt engineering, Retrieval-Augmented Generation (RAG), and fine-tuning. We assessed the different models and applied techniques using common similarity metrics like BLEU-4, ROUGE, and BERTScore, complemented by a small human evaluation of helpfulness and trustworthiness. Our findings indicate that RAG combined with prompt engineering significantly enhances model responses and produces better factual answers. In the context of education, RAG appears to be an ideal technique, as it is based on…
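Overlap metrics like the ROUGE family mentioned in the abstract score a model answer by token overlap with a reference answer. A dependency-free sketch of a ROUGE-1-style unigram F1 (the sentences are invented; the paper's evaluation additionally used BLEU-4 and BERTScore, which this does not reproduce):

```python
# Simplified ROUGE-1 F1: unigram-overlap precision/recall between a
# reference answer and a candidate answer, over whitespace tokens.
from collections import Counter

def rouge1_f1(reference, candidate):
    ref = Counter(reference.split())
    cand = Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the robot arm has six joints",
                  "the arm of the robot has six joints")
# high overlap despite reordering: unigram metrics ignore word order
```

BERTScore replaces the exact-token match here with contextual-embedding similarity, which is why the study reports it alongside the n-gram metrics.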
AI and Machine Learning Driven Indoor Localization and Navigation with Mobile Embedded Systems


Hyper-YOLO: When Visual Object Detection Meets Hypergraph Computation


arXiv:2408.04804v1 Announce Type: new Abstract: We introduce Hyper-YOLO, a new object detection method that integrates hypergraph computations to capture the complex high-order correlations among visual features. Traditional YOLO models, while powerful, have limitations in their neck designs that restrict the integration of cross-level features and the exploitation of high-order feature interrelationships. To address these challenges, we propose the Hypergraph Computation Empowered Semantic Collecting and Scattering (HGC-SCS) framework, which transposes visual feature maps into a semantic space and constructs a hypergraph for high-order message propagation. This enables the model to acquire both semantic and structural information, advancing beyond conventional feature-focused learning.…
Risks, Causes, and Mitigations of Widespread Deployments of Large Language Models (LLMs): A Survey


arXiv:2408.04643v1 Announce Type: new Abstract: Recent advancements in Large Language Models (LLMs), such as ChatGPT and LLaMA, have significantly transformed Natural Language Processing (NLP) with their outstanding abilities in text generation, summarization, and classification. Nevertheless, their widespread adoption introduces numerous challenges, including issues related to academic integrity, copyright, environmental impacts, and ethical considerations such as data bias, fairness, and privacy. The rapid evolution of LLMs also raises concerns regarding the reliability and generalizability of their evaluations. This paper offers a comprehensive survey of the literature on these subjects, systematically gathered and synthesized from Google Scholar. Our study provides an in-depth…
Confident magnitude-based neural network pruning


arXiv:2408.04759v1 Announce Type: new Abstract: Pruning neural networks has proven to be a successful approach to increasing the efficiency and reducing the memory footprint of deep learning models without compromising performance. Previous literature has shown that it is possible to achieve a sizable reduction in the number of parameters of a deep neural network without deteriorating its predictive capacity in one-shot pruning regimes. Our work builds on this background to provide rigorous uncertainty quantification for pruning neural networks reliably, which has not been addressed to a great extent in the previous literature on pruning methods in computer vision…
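One-shot magnitude pruning, the baseline the abstract builds on, simply zeroes the smallest-magnitude weights in a single pass. A generic NumPy sketch of that baseline (the example weights are invented; this does not include the paper's uncertainty-quantification contribution):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """One-shot magnitude pruning: zero out the smallest-|w| entries.

    `weights` is any parameter array; `sparsity` is the fraction of
    entries to remove in a single pass.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep strictly larger weights
    return weights * mask

w = np.array([[0.9, -0.05],
              [0.02, -1.2]])
pruned = magnitude_prune(w, 0.5)  # zeroes the two smallest-magnitude entries
```

The uncertainty-quantification question the paper raises is exactly about what this deterministic thresholding ignores: how confident one can be that the pruned network's predictions are preserved.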