Multi-Modal Parameter-Efficient Fine-tuning via Graph Neural Network

arXiv:2408.00290v1 Announce Type: new Abstract: With the advent of the era of foundation models, pre-training and fine-tuning have become common paradigms. Recently, parameter-efficient fine-tuning has garnered widespread attention due to its better balance between the number of learnable parameters and performance. However, some current parameter-efficient fine-tuning methods model only a single modality and fail to exploit the structural knowledge in downstream tasks. To address this issue, this paper proposes a multi-modal parameter-efficient fine-tuning method based on graph networks. Each image is fed into a multi-modal large language model (MLLM) to generate a text description. The image and its corresponding text…
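The excerpt stops before the method details, so the following is only a minimal PyTorch sketch of the general idea of a graph-based multi-modal adapter: frozen image and text-description embeddings become nodes of a small graph, and only a lightweight graph layer is trained. All module names, shapes, and the fully connected adjacency below are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    class GraphAdapter(nn.Module):
        """One trainable graph-convolution step over image/text nodes."""
        def __init__(self, dim: int):
            super().__init__()
            self.proj = nn.Linear(dim, dim)

        def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # Row-normalize the adjacency so aggregation averages over neighbors.
            deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
            h = (adj / deg) @ nodes                    # aggregate neighbor features
            return nodes + torch.relu(self.proj(h))    # residual update

    # Hypothetical frozen features: 1 image node plus 4 caption-token nodes.
    image_feat = torch.randn(1, 512)   # e.g. from a frozen vision encoder
    text_feats = torch.randn(4, 512)   # e.g. from a frozen text encoder
    nodes = torch.cat([image_feat, text_feats], dim=0)

    adj = torch.ones(5, 5)             # fully connect image and text nodes
    adapter = GraphAdapter(512)        # the only trainable parameters
    fused = adapter(nodes, adj)        # (5, 512) task-adapted features

Freezing both encoders and training only the adapter's small projection is what would keep the learnable-parameter count low in a scheme like this.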
Read More
The GenAI jitters: Is there enough demand for $1 trillion in AI spending?

Business Insider chief tech correspondent Ashley Stewart wrote a story this week based on what seemed like a pretty innocuous topic. It was about a CIO at a pharmaceutical company who had employees try Microsoft's 365 Copilot AI-powered productivity software. After 6 months, he cancelled because the new features weren't worth the extra money. You can read the article here. The piece really grabbed BI subscribers' attention, and I've been thinking about why. I've realized this anecdote goes to the heart of the generative AI boom, and whether it can continue. Big tech companies are forecast to spend $1 trillion on data centers,…
Read More
Downstream bias mitigation is all you need

arXiv:2408.00612v1 Announce Type: new Abstract: The advent of transformer-based architectures and large language models (LLMs) has significantly advanced the performance of natural language processing (NLP) models. Since these LLMs are trained on huge corpora of data from the web and other sources, there has been major concern about harmful prejudices that may be transferred from the data. In many applications, these pre-trained LLMs are fine-tuned on task-specific datasets, which can further contribute to biases. This paper studies the extent of biases absorbed by LLMs during pre-training as well as their task-specific behaviour after fine-tuning. We found that controlled…
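The abstract is cut off before any measurement details; as a hedged illustration of one common way to quantify such biases, the sketch below uses a masked-LM template probe from Hugging Face transformers to compare the scores a model assigns to contrasting demographic terms. The model name and template are placeholders, and this is a generic probe rather than the paper's protocol.

    from transformers import pipeline

    # Placeholder checkpoint; swap in the fine-tuned model under study.
    fill = pipeline("fill-mask", model="bert-base-uncased")

    template = "The nurse said that [MASK] would be back soon."
    for result in fill(template, targets=["he", "she"]):
        print(result["token_str"], round(result["score"], 4))
    # A large score gap between "he" and "she" for a role word like "nurse"
    # is one simple signal of a gendered association; running the same probe
    # before and after fine-tuning shows how much the task data shifts it.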
Read More
Google will no longer air an Olympics ad that showed a child using AI to write a fan letter

Google is phasing out an Olympics ad for its AI-powered chatbot, Gemini, after receiving widespread criticism for showing a father using AI to help his daughter write a fan letter to her favorite athlete. The 60-second commercial, which is still available on YouTube, shows a father using Gemini to write a fan letter to an idol, Olympic track star Sydney McLaughlin-Levrone, on behalf of his young daughter. “She wants to show Sydney some love and I am pretty good with words, but this has to be just right,” the dad says in the commercial. “So Gemini, help my daughter write…
Read More
Database: Seeding

Database Seeding in Laravel is used to populate the database with initial data or dummy data. This is very useful for application development and testing, because it lets developers quickly fill the database with the data needed to run the application or to carry out tests. Here are some of the main uses of Database Seeding in Laravel: A. Populating the Database with Initial Data: Seeders let you fill the database with the default data the application needs.

    // database/seeders/DatabaseSeeder.php
    namespace Database\Seeders;

    use Illuminate\Database\Seeder;
    use App\Models\User;

    class DatabaseSeeder extends Seeder
    {
        public function run()
        {
            User::factory()->count(50)->create();
        }
    }

B. Creating Dummy Data for…
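As a usage note for the seeder above: php artisan db:seed runs the DatabaseSeeder class by default, and php artisan db:seed --class=UserSeeder (where UserSeeder stands in for any seeder class name) targets a single seeder.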
Read More
The US is deploying more warships able to shoot down ballistic missiles to keep Iran and its friends in check

The US is deploying additional warships able to shoot down ballistic missiles to the Middle East and nearby waters, a Pentagon spokesperson said Friday, as the region braces for a potential attack on Israel by Iran and its proxies. Sabrina Singh, the deputy press secretary, said the Pentagon is changing up the US military's force posture "to mitigate the possibility of regional escalation by Iran or Iran's partners and proxies." Earlier this week, Israel killed a top Hezbollah commander in Lebanon, and hours later, it became the leading suspect in the assassination of Hamas' political chief in Iran. The stunning, back-to-back killings sparked…
Read More
Exploiting Preferences in Loss Functions for Sequential Recommendation via Weak Transitivity

arXiv:2408.00326v1 Announce Type: new Abstract: The choice of optimization objective is pivotal in the design of a recommender system, as it shapes how a user's intent is modeled from previous interactions. Existing approaches mainly adhere to three categories of loss functions: pairwise, pointwise, and setwise. Despite their effectiveness, a critical and common drawback of these objectives is that they view the next observed item as the unique positive while treating all remaining items as equally negative. Such a binary label assignment is generally limited to assuring a higher recommendation score for the positive item, neglecting potential structures induced…
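For context on the binary label assignment being criticized, here is a minimal PyTorch sketch of the standard BPR pairwise loss, in which the one observed next item is the sole positive and a sampled item is a flat negative. This is background on existing objectives, not the paper's proposed loss, and all shapes and names are illustrative.

    import torch
    import torch.nn.functional as F

    def bpr_loss(user_emb, pos_item_emb, neg_item_emb):
        """All inputs are (batch, dim) tensors; dot products score user-item pairs."""
        pos_score = (user_emb * pos_item_emb).sum(dim=-1)   # the one labeled positive
        neg_score = (user_emb * neg_item_emb).sum(dim=-1)   # a flat, equally weighted negative
        # Every negative is penalized identically, which is exactly the binary
        # assignment the abstract argues ignores preference structure.
        return -F.logsigmoid(pos_score - neg_score).mean()

    u = torch.randn(32, 64)
    pos, neg = torch.randn(32, 64), torch.randn(32, 64)
    loss = bpr_loss(u, pos, neg)   # scalar, ready for backprop in a real model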
Read More
Gradient Harmonization in Unsupervised Domain Adaptation

arXiv:2408.00288v1 Announce Type: new Abstract: Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Many current methods focus on learning feature representations that are both discriminative for classification and invariant across domains by simultaneously optimizing domain-alignment and classification tasks. However, these methods often overlook a crucial challenge: the inherent conflict between the two tasks during gradient-based optimization. In this paper, we delve into this issue and introduce two effective solutions, GH and GH++ (collectively, Gradient Harmonization), to mitigate the conflict between the domain-alignment and classification tasks. GH operates…
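The excerpt ends before GH is defined; the sketch below illustrates the general family of gradient de-conflicting techniques it belongs to, detecting conflict via a negative dot product between the two task gradients and projecting one gradient onto the normal plane of the other (in the spirit of PCGrad). GH and GH++ as actually defined in the paper may differ; everything here is an assumption for illustration.

    import torch

    def harmonize(g_cls: torch.Tensor, g_align: torch.Tensor) -> torch.Tensor:
        """Combine two flattened task gradients, dropping the conflicting part."""
        dot = torch.dot(g_cls, g_align)
        if dot < 0:  # negative dot product: the tasks pull in conflicting directions
            # Project g_align onto the normal plane of g_cls, removing the conflict.
            g_align = g_align - (dot / g_cls.norm().pow(2)) * g_cls
        return g_cls + g_align

    g_cls = torch.tensor([1.0, 0.0])
    g_align = torch.tensor([-0.5, 1.0])   # conflicts with g_cls (dot = -0.5)
    print(harmonize(g_cls, g_align))      # tensor([1., 1.]): conflict removed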
Read More