MH-MoE: Multi-Head Mixture-of-Experts

Authors: Shaohan Huang and 3 other authors

Abstract: Multi-Head Mixture-of-Experts (MH-MoE) demonstrates superior performance by using the multi-head mechanism to collectively attend to information from various representation spaces within different experts. In this paper, we present a novel implementation of MH-MoE that maintains both FLOPs and parameter parity with sparse Mixture-of-Experts models. Experimental results on language models show that the new implementation yields quality improvements over both vanilla MoE and fine-grained MoE models. Additionally, our experiments demonstrate that MH-MoE is compatible with 1-bit Large Language Models (LLMs) such as BitNet.
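To make the mechanism in the abstract concrete, here is a minimal PyTorch sketch of a multi-head MoE layer: each token is split into head-sized sub-tokens, each sub-token is routed (top-1) to one of several small experts, and the expert outputs are merged back into a single token representation. The class name, layer shapes, and top-1 routing are assumptions for illustration only; this is not the authors' implementation and it does not reproduce the paper's FLOPs/parameter-parity accounting.

```python
# Illustrative MH-MoE layer (not the authors' code): tokens are split into
# sub-tokens per head, routed to experts, then merged back.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MHMoE(nn.Module):
    def __init__(self, d_model: int, num_heads: int, num_experts: int, d_ffn: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        # Experts operate on head-sized sub-token vectors.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(self.d_head, d_ffn),
                nn.GELU(),
                nn.Linear(d_ffn, self.d_head),
            )
            for _ in range(num_experts)
        )
        self.router = nn.Linear(self.d_head, num_experts)
        self.merge = nn.Linear(d_model, d_model)  # re-combines head outputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, s, d = x.shape
        # Split each token into `num_heads` sub-tokens of size d_head.
        sub = x.reshape(b * s * self.num_heads, self.d_head)
        # Top-1 routing: each sub-token picks its highest-scoring expert.
        gates = F.softmax(self.router(sub), dim=-1)
        weight, expert_idx = gates.max(dim=-1)
        out = torch.zeros_like(sub)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():
                out[mask] = weight[mask, None] * expert(sub[mask])
        # Merge sub-token outputs back into full token vectors.
        return self.merge(out.reshape(b, s, d))


if __name__ == "__main__":
    layer = MHMoE(d_model=512, num_heads=4, num_experts=8, d_ffn=1024)
    tokens = torch.randn(2, 16, 512)
    print(layer(tokens).shape)  # torch.Size([2, 16, 512])
```

Because each expert acts on a vector of size d_model / num_heads rather than the full hidden size, the sub-token routing adds heads without inflating the per-layer parameter count, which is the intuition behind the parity claim in the abstract.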

Submission history

From: Shaohan Huang
[v1] Mon, 25 Nov 2024 09:05:36 UTC (642 KB)
[v2] Tue, 26 Nov 2024 06:28:54 UTC (642 KB)


