BALI—A Benchmark for Accelerated Language Model Inference

Publication: Contribution to journal › Research article › Contributed › Peer-reviewed

Contributors

Abstract

The rise of Large Language Models (LLMs) has revolutionized natural language processing, enabling advances across diverse applications, including chatbots, live translators, content generation, virtual assistants, and domain-specific automation tools. These applications rely on real-time or near-real-time responses to sequential LLM requests, creating a critical demand for efficient, accelerated inference. This demand has led to numerous frameworks that optimize inference speed and resource utilization. However, these frameworks are often mutually incomparable or inadequately characterized because standardized benchmarks are lacking: the configuration space, spanned by factors such as hardware specifications, inference framework parameters, and dataset variations, is vast. We propose BALI, an open-source Benchmark for Accelerated Language Model Inference, which provides comprehensive analysis and standardized evaluation metrics to make LLM performance comparable across configurations. With BALI, we define a set of measurements to evaluate and rank the efficiency of LLM inference frameworks across multiple aspects, including sequential decoding, parallelization, and setup efficiency. We report results mainly for small to medium-sized models (1-30B parameters) in a sequential, non-batched setup, which is highly relevant for many real-time LLM applications. These observations reveal that the design decisions for such a framework constitute an application-dependent, multidimensional challenge. Our objective is therefore to provide an LLM inference benchmark with a clearly defined evaluation procedure that incorporates multidimensional criteria to enable comparable performance assessments.
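
The measurement aspects named in the abstract (sequential decoding speed and setup efficiency) can be illustrated with a short sketch. The following Python snippet is not BALI's implementation; it is a minimal illustration, assuming a hypothetical generate_stream callable that yields one decoded token at a time, of how time-to-first-token and steady-state decoding speed could be measured for a single non-batched request.

import time
from typing import Callable, Iterator

def measure_sequential_decoding(
    generate_stream: Callable[[str], Iterator[str]],  # hypothetical streaming generator
    prompt: str,
) -> dict:
    # Time a single non-batched request: time-to-first-token (TTFT)
    # and steady-state decoding speed in tokens per second.
    start = time.perf_counter()
    first_token_time = None
    n_tokens = 0
    for _ in generate_stream(prompt):
        now = time.perf_counter()
        if first_token_time is None:
            first_token_time = now  # first token marks the end of setup/prefill
        n_tokens += 1
    end = time.perf_counter()
    first = first_token_time if first_token_time is not None else end
    ttft = first - start
    decode_time = end - first
    tps = (n_tokens - 1) / decode_time if n_tokens > 1 and decode_time > 0 else 0.0
    return {"ttft_s": ttft, "tokens": n_tokens, "decode_tokens_per_s": tps}

A full benchmark in the spirit of the abstract would repeat such measurements across prompts, models, and inference frameworks and aggregate the resulting statistics; the function and metric names above are illustrative, not BALI's.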

Details

Original language: English
Pages (from-to): 98976-98989
Number of pages: 14
Journal: IEEE Access
Volume: 13
Publication status: Published - 5 June 2025
Peer-review status: Yes

External IDs

ORCID /0000-0001-7047-3813/work/191041795

Keywords

  • generation speed, inference standardization, LLM inference, LLM inference benchmarking, performance analysis, transformer decoder