Viral News

Bi-DCSpell: A Bi-directional Detector-Corrector Interactive Framework for Chinese Spelling Check

arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website. Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them. Have an idea for a project that will add value for arXiv's community? Learn more about arXivLabs.
Read More
Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities

arXiv:2406.01757v1 Announce Type: new Abstract: The rise of foundation models holds immense promise for advancing AI, but this progress may amplify existing risks and inequalities, leaving marginalized communities behind. In this position paper, we discuss that disparities towards marginalized communities - performance, representation, privacy, robustness, interpretability and safety - are not isolated concerns but rather interconnected elements of a cascading disparity phenomenon. We contrast foundation models with traditional models and highlight the potential for exacerbated disparity against marginalized communities. Moreover, we emphasize the unique threat of cascading impacts in foundation models, where interconnected disparities can trigger long-lasting negative consequences, specifically…
Read More
L-MAGIC: Language Model Assisted Generation of Images with Coherence

Read More
CR-UTP: Certified Robustness against Universal Text Perturbations

arXiv:2406.01873v1 Announce Type: new Abstract: It is imperative to ensure the stability of every prediction made by a language model; that is, a language model's prediction should remain consistent despite minor input variations, like word substitutions. In this paper, we investigate the problem of certifying a language model's robustness against Universal Text Perturbations (UTPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing certified-robustness methods based on random smoothing have shown considerable promise in certifying input-specific text perturbations (ISTPs), operating under the assumption that any random alteration of a sample's clean or adversarial words would negate…
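The random-smoothing idea behind such certificates can be illustrated with a minimal sketch: classify many randomly masked copies of the input and take a majority vote, so that a small perturbation of a few words is unlikely to flip the aggregate prediction. The `classify` callback, mask rate, and sample count below are illustrative assumptions, not the paper's actual CR-UTP procedure.

```python
import random
from collections import Counter

def smoothed_predict(classify, tokens, mask_rate=0.3, n_samples=100,
                     mask_token="[MASK]", seed=0):
    """Majority-vote prediction over randomly masked copies of the input.

    `classify` is a hypothetical black-box classifier mapping a token list
    to a label; random word masking plays the role of the smoothing noise.
    Returns the winning label and its empirical vote share, which a
    certification procedure would compare against a threshold.
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        # Independently mask each token with probability mask_rate.
        noisy = [mask_token if rng.random() < mask_rate else t for t in tokens]
        votes[classify(noisy)] += 1
    label, count = votes.most_common(1)[0]
    return label, count / n_samples
```

A large vote share for the top label is what makes the smoothed prediction stable: an adversary must change enough words to overturn the majority, not just one forward pass.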
Read More
Sparser, Better, Deeper, Stronger: Improving Sparse Training with Exact Orthogonal Initialization

Read More
Boosting Vision-Language Models with Transduction

Read More
#EpiTwitter: Public Health Messaging During the COVID-19 Pandemic

arXiv:2406.01866v1 Announce Type: new Abstract: Effective communication during health crises is critical, with social media serving as a key platform for public health experts (PHEs) to engage with the public. However, it also amplifies pseudo-experts promoting contrarian views. Despite its importance, the role of emotional and moral language in PHEs' communication during COVID-19 remains underexplored. This study examines how PHEs and pseudo-experts communicated on Twitter during the pandemic, focusing on emotional and moral language and their engagement with political elites. Analyzing tweets from 489 PHEs and 356 pseudo-experts from January 2020 to January 2021, alongside public responses, we identified…
Read More
Celebrating Achievements in Data Intelligence: Presenting the 2024 Databricks Data Intelligence Award Finalists

The annual Data Team Awards spotlight data teams and the pivotal role they play in business operations across industries and markets. By continually raising the bar, these innovators demonstrate the technology and ingenuity needed to thrive in today’s business world. With more than 200 nominations from around the world, the Data Team Awards underscore the breadth of innovation happening in the data and artificial intelligence spheres. As we look forward to the Data + AI Summit, Databricks is gearing up to showcase these trailblazers and share their journeys of data-driven transformation and innovation. The Data Team Data Intelligence Award honors teams that…
Read More
Optimizing the Optimal Weighted Average: Efficient Distributed Sparse Classification

arXiv:2406.01753v1 Announce Type: new Abstract: While distributed training is often viewed as a solution to optimizing linear models on increasingly large datasets, inter-machine communication costs of popular distributed approaches can dominate as data dimensionality increases. Recent work on non-interactive algorithms shows that approximate solutions for linear models can be obtained efficiently with only a single round of communication among machines. However, this approximation often degenerates as the number of machines increases. In this paper, building on the recent optimal weighted average method, we introduce a new technique, ACOWA, that allows an extra round of communication to achieve noticeably better approximation…
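The single-round scheme the abstract builds on can be sketched as follows: each machine solves a local (here, ridge-regularized) linear problem on its own shard, and a server combines the local solutions with weights proportional to shard size. This is a simplified stand-in under stated assumptions; the paper's optimal weighting and ACOWA's extra communication round are not reproduced here.

```python
import numpy as np

def local_fit(X, y, lam=1e-2):
    """Closed-form ridge solution on one machine's data shard."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def weighted_average(models, counts):
    """Server-side combination of local solutions.

    Weights are proportional to shard size — a simple surrogate for the
    optimal weighted average; only one round of communication (the model
    vectors themselves) is needed.
    """
    w = np.asarray(counts, dtype=float)
    w /= w.sum()
    return sum(wi * mi for wi, mi in zip(w, models))
```

On well-conditioned data the combined vector is close to the full-data solution, but as the abstract notes, such one-shot averaging degrades as the number of machines grows — the motivation for spending one more round.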
Read More

Finding Lottery Tickets in Vision Models via Data-driven Spectral Foresight Pruning

arXiv:2406.01820v1 Announce Type: new Abstract: Recent advances in neural network pruning have shown how it is possible to reduce the computational costs and memory demands of deep learning models before training. We focus on this framework and propose a new pruning at initialization algorithm that leverages the Neural Tangent Kernel (NTK) theory to align the training dynamics of the sparse network with that of the dense one. Specifically, we show how the usually neglected data-dependent component in the NTK's spectrum can be taken into account by providing an analytical upper bound to the NTK's trace obtained by decomposing neural networks…
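The pruning-at-initialization loop the abstract refers to can be sketched in miniature: score every weight with a data-dependent saliency, then zero out the lowest-scoring fraction before any training. The magnitude-times-gradient score below is a deliberately simple placeholder, not the paper's NTK-trace bound, and the single linear layer is an illustrative assumption.

```python
import numpy as np

def prune_at_init(W, X, y, sparsity=0.5):
    """Data-driven pruning sketch for a linear layer y_hat = X @ W.

    Scores each weight by |w * dL/dw| on a batch (a data-dependent
    saliency in the spirit of, but far simpler than, an NTK-trace-based
    criterion) and removes the lowest-scoring `sparsity` fraction.
    """
    resid = X @ W - y
    grad = X.T @ resid / len(X)          # gradient of 0.5 * mean squared error
    score = np.abs(W * grad)
    k = int(score.size * sparsity)       # number of weights to remove
    thresh = np.partition(score.ravel(), k)[k] if k > 0 else -np.inf
    mask = score >= thresh               # keep only high-saliency weights
    return W * mask, mask
```

The mask is fixed before training starts; the paper's contribution is choosing it so that the sparse network's training dynamics, as captured by the NTK spectrum, stay close to the dense network's.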
Read More