OrionBench: Benchmarking Time Series Generative Models in the Service of the End-User
by Sarah Alnegheimish and 2 other authors
Abstract: Time series anomaly detection is a vital task in many domains, including patient monitoring in healthcare, forecasting in finance, and predictive maintenance in energy industries. This has led to a proliferation of anomaly detection methods, including deep learning-based methods. Benchmarks are essential for comparing the performance of these models as they emerge in a fair, rigorous, and reproducible manner. Although several benchmarks for comparing models have been proposed, these usually rely on a one-time execution over a limited set of datasets, with comparisons restricted to a few models. We propose OrionBench: an end-user-centric, continuously maintained benchmarking framework for unsupervised time series anomaly detection models. Our framework provides universal abstractions to represent models, hyperparameter standardization, extensibility to add new pipelines and datasets, pipeline verification, and frequent releases with published updates of the benchmark. We demonstrate how to use OrionBench and show the performance of pipelines across 17 releases published over the course of four years. We also walk through two real scenarios we experienced with OrionBench that highlight the importance of continuous benchmarking for unsupervised time series anomaly detection.
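As a rough illustration of the end-user workflow the abstract describes (fitting a named pipeline and detecting anomalies on new data), the following is a minimal sketch assuming the pipeline-based API of the public Orion library; the pipeline name, `load_signal` helper, and `fit`/`detect` methods are assumptions drawn from the project's documentation and may differ from the version benchmarked in the paper.

```python
# Minimal sketch of an end-user workflow with the Orion library.
# Assumption: the library exposes Orion(pipeline=...), fit(), detect(),
# and a load_signal() helper for its demo signals.
from orion import Orion
from orion.data import load_signal

# Load a demo signal (a pandas DataFrame with 'timestamp' and 'value' columns).
train_data = load_signal('S-1-train')

# Select one of the benchmarked pipelines by name.
orion = Orion(pipeline='lstm_dynamic_threshold')

# Fit the pipeline on the training portion of the signal.
orion.fit(train_data)

# Detect anomalies on new data; the result lists detected anomalous intervals.
new_data = load_signal('S-1-new')
anomalies = orion.detect(new_data)
print(anomalies)
```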
Submission history
From: Sarah Alnegheimish
[v1] Thu, 26 Oct 2023 19:43:16 UTC (705 KB)
[v2] Mon, 4 Mar 2024 20:39:19 UTC (2,627 KB)
[v3] Sun, 24 Nov 2024 21:55:00 UTC (3,310 KB)