BALI—A Benchmark for Accelerated Language Model Inference
Research output: Contribution to journal › Research article › Contributed › peer-review
Abstract
The rise of Large Language Models (LLMs) has revolutionized natural language processing, enabling advancements across diverse applications, including chatbots, live translators, content generation, virtual assistants, and domain-specific automation tools. These applications rely on real-time or near-real-time responses to process sequential LLM requests, creating a critical demand for efficient and accelerated inference. These developments have led to numerous frameworks that optimize inference speed and resource utilization. However, they are often mutually incomparable or inadequately described due to the lack of standardized benchmarks. Consequently, there is a notable lack of comparison frameworks, owing to the vast configuration space bounded by factors such as hardware specifications, inference framework parameters, and dataset variations. We propose BALI, an open-source Benchmark for Accelerated Language Model Inference, which aims to provide comprehensive analysis and standardized evaluation metrics to enhance the comparability of LLM performance across configurations. With BALI, we define a substantial set of measurements to evaluate and rank the efficiency of LLM frameworks across multiple aspects, including sequential decoding, parallelization, and setup efficiency. We show results mainly for small to medium-sized models (1-30B parameters) in a sequential, non-batched setup, which is highly relevant for various real-time LLM applications. These observations reveal that the design decisions for such a framework constitute an application-dependent and multidimensional challenge. Thus, our objective is to provide an LLM inference benchmark with a clearly defined evaluation procedure, incorporating multidimensional criteria to yield comparable performance assessments.
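To make the measurement dimensions named above concrete, the following is a minimal sketch of how setup time, time-to-first-token, and steady-state decode throughput can be timed for sequential (non-batched) requests. The `load_model` and `stream_tokens` callables are hypothetical stand-ins for a framework-specific loader and token streamer; this is not BALI's actual harness or API.

```python
# Sketch of per-request timing for a sequential (non-batched) inference
# benchmark: setup time, time-to-first-token (TTFT), and decode tokens/s.
# `load_model` and `stream_tokens` are hypothetical placeholders.
import time
from typing import Callable, Iterable


def benchmark_sequential(
    load_model: Callable[[], object],
    stream_tokens: Callable[[object, str], Iterable[str]],
    prompts: list[str],
) -> dict:
    # Setup efficiency: wall-clock time to get the model ready to serve.
    t0 = time.perf_counter()
    model = load_model()
    setup_s = time.perf_counter() - t0

    ttfts, rates = [], []
    for prompt in prompts:  # requests are issued strictly one after another
        start = time.perf_counter()
        first, n_tokens = None, 0
        for _ in stream_tokens(model, prompt):  # assumes >= 1 token per request
            n_tokens += 1
            if first is None:
                first = time.perf_counter() - start  # time-to-first-token
        total = time.perf_counter() - start
        ttfts.append(first)
        # Exclude the first token so the rate reflects steady-state decoding.
        rates.append((n_tokens - 1) / (total - first) if n_tokens > 1 else 0.0)

    return {
        "setup_s": setup_s,
        "mean_ttft_s": sum(ttfts) / len(ttfts),
        "mean_decode_tok_per_s": sum(rates) / len(rates),
    }
```

Separating setup time from per-token decode speed matters because, as the abstract notes, the ranking of frameworks is application-dependent: a deployment that reloads models frequently weights setup cost very differently from a long-running service.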
Details
| Original language | English |
|---|---|
| Pages (from-to) | 98976-98989 |
| Number of pages | 14 |
| Journal | IEEE Access |
| Volume | 13 |
| Publication status | Published - 5 Jun 2025 |
| Peer-reviewed | Yes |
External IDs
| ORCID | /0000-0001-7047-3813/work/191041795 |
|---|
Keywords
- generation speed, inference standardization, LLM inference, LLM inference benchmarking, performance analysis, transformer decoder