Harnessing Webpage UIs for Text-Rich Visual Understanding
by Junpeng Liu and 8 other authors
Abstract: Text-rich visual understanding, the ability to process environments where dense textual content is integrated with visuals, is crucial for multimodal large language models (MLLMs) to interact effectively with structured environments. To enhance this capability, we propose synthesizing general multimodal instructions from webpage UIs using text-based large language models (LLMs). Despite lacking direct visual input, text-based LLMs can process structured text representations from webpage accessibility trees. These instructions are then paired with UI screenshots to train multimodal models. We introduce MultiUI, a dataset containing 7.3 million samples from 1 million websites, covering diverse multimodal tasks and UI layouts. Models trained on MultiUI not only excel in web UI tasks (achieving up to a 48% improvement on VisualWebBench and a 19.1% boost in element accuracy on the web agent dataset Mind2Web) but also generalize surprisingly well to non-web UI tasks and even to non-UI domains, such as document understanding, OCR, and chart interpretation. These results highlight the broad applicability of web UI data for advancing text-rich visual understanding across various scenarios.
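The pipeline described in the abstract (a text-only LLM reads a webpage accessibility tree, writes an instruction, and the result is paired with a UI screenshot for multimodal training) can be sketched roughly as below. This is a minimal illustration based only on the abstract's description; the function names (llm_complete, synthesize_sample), the prompt wording, and the output schema are hypothetical placeholders, not the authors' actual MultiUI pipeline.

import json

PROMPT_TEMPLATE = (
    "You are given the accessibility tree of a webpage.\n"
    "Write a question-answer pair that requires understanding the page.\n\n"
    "Accessibility tree:\n{tree}\n\n"
    'Return JSON with keys "question" and "answer".'
)

def llm_complete(prompt: str) -> str:
    # Placeholder for a call to any text-based LLM API; swap in a real client.
    raise NotImplementedError

def synthesize_sample(tree: str, screenshot_path: str) -> dict:
    # The text-only LLM never sees the screenshot; it works from the
    # structured accessibility-tree text alone, as the abstract describes.
    qa = json.loads(llm_complete(PROMPT_TEMPLATE.format(tree=tree)))
    # Pair the synthesized instruction with the UI screenshot so a
    # multimodal model can be trained on (image, instruction, response).
    return {
        "image": screenshot_path,
        "instruction": qa["question"],
        "response": qa["answer"],
    }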
Submission history
From: Junpeng Liu
[v1] Thu, 17 Oct 2024 17:48:54 UTC (6,944 KB)
[v2] Fri, 18 Oct 2024 09:01:01 UTC (6,937 KB)
[v3] Wed, 6 Nov 2024 08:29:22 UTC (6,937 KB)