ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling
by Chenyu Huang and 7 other authors
Abstract: Optimization modeling plays a critical role in applying Operations Research (OR) tools to real-world problems, yet it poses challenges and requires extensive expertise from OR experts. With the advent of large language models (LLMs), new opportunities have emerged to streamline and automate this task. However, current research predominantly relies on closed-source LLMs such as GPT-4, along with extensive prompt engineering. This reliance stems from the scarcity of high-quality training datasets for optimization modeling, and it results in elevated costs, prolonged processing times, and privacy concerns. To address these challenges, our work is the first to propose a viable path for training open-source LLMs capable of optimization modeling and of developing solver code, ultimately yielding a superior ability to automate optimization modeling and solving. In particular, we introduce OR-Instruct, a semi-automated data synthesis framework for optimization modeling that enables customizable enhancements for specific scenarios or model types. We also introduce IndustryOR, the first industrial benchmark for evaluating LLMs on practical OR problems. We train several 7B-scale open-source LLMs on the synthesized data (dubbed ORLMs, this https URL), which exhibit significantly enhanced optimization modeling capabilities, achieving state-of-the-art performance across the NL4OPT, MAMO, and IndustryOR benchmarks. Our experiments also highlight the potential of scaling laws and reinforcement learning to further improve ORLMs. Finally, we discuss the workflows and human-machine interaction paradigms of ORLMs in practical industrial applications.
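To make the task concrete, the sketch below shows the kind of artifact an ORLM is trained to produce: a natural-language OR problem translated into a mathematical model and executable solver code. This is a minimal illustration, not the authors' pipeline; the abstract does not name a solver, so the open-source PuLP library stands in here, and the factory problem, variable names, and numbers are all hypothetical.

```python
# Hypothetical input problem: a factory makes products A and B. Each unit of A
# yields $40 profit and needs 2 machine-hours; each unit of B yields $30 profit
# and needs 1 machine-hour. Only 100 machine-hours are available, and at most
# 40 units of B can be sold. Maximize profit.
#
# Mathematical model the LLM would emit:
#   max  40*x_A + 30*x_B
#   s.t. 2*x_A + x_B <= 100,  x_B <= 40,  x_A >= 0, x_B >= 0

from pulp import LpMaximize, LpProblem, LpStatus, LpVariable, value

prob = LpProblem("factory_profit", LpMaximize)

x_a = LpVariable("units_of_A", lowBound=0)
x_b = LpVariable("units_of_B", lowBound=0)

prob += 40 * x_a + 30 * x_b, "total_profit"    # objective
prob += 2 * x_a + x_b <= 100, "machine_hours"  # capacity constraint
prob += x_b <= 40, "demand_limit_B"            # demand constraint

prob.solve()
print(LpStatus[prob.status], value(prob.objective))  # Optimal 2400.0
print(x_a.varValue, x_b.varValue)                    # 30.0 40.0
```

Both the model and the solver code must be generated and must agree with each other, which is what makes high-quality paired training data scarce and motivates the OR-Instruct synthesis framework described above.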
Submission history
From: Zhengyang Tang
[v1] Tue, 28 May 2024 01:55:35 UTC (212 KB)
[v2] Thu, 30 May 2024 02:12:05 UTC (212 KB)
[v3] Fri, 15 Nov 2024 03:25:40 UTC (2,001 KB)
[v4] Sun, 5 Jan 2025 14:35:49 UTC (1,868 KB)