BALI—A Benchmark for Accelerated Language Model Inference

Research output: Contribution to journal › Research article › Contributed › Peer-reviewed

Abstract

The rise of Large Language Models (LLMs) has revolutionized natural language processing, enabling advances across diverse applications, including chatbots, live translators, content generation, virtual assistants, and domain-specific automation tools. These applications rely on real-time or near-real-time responses to sequential LLM requests, creating a critical demand for efficient, accelerated inference. These developments have led to numerous frameworks that optimize inference speed and resource utilization. However, these frameworks are often mutually incomparable or inadequately described due to the lack of standardized benchmarks, and systematic comparisons are further hindered by the vast configuration space spanned by factors such as hardware specifications, inference framework parameters, and dataset variations. We propose BALI, an open-source Benchmark for Accelerated Language Model Inference, which aims to provide comprehensive analysis and standardized evaluation metrics to enhance the comparability of LLM performance across configurations. With BALI, we introduce a set of measurements to evaluate and rank the efficiency of LLM inference frameworks across multiple aspects, including sequential decoding, parallelization, and setup efficiency. We present results mainly for small to medium-sized models (1-30B parameters) in a sequential, non-batched setup, which is highly relevant for many real-time LLM applications. These observations reveal that the design decisions behind such a framework constitute an application-dependent, multidimensional challenge. Our objective is thus to provide an LLM inference benchmark with a clearly defined evaluation procedure, incorporating multidimensional criteria to yield comparable performance assessments.
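
To make the sequential-decoding metrics concrete, below is a minimal Python sketch of how per-request latency figures such as time-to-first-token (TTFT) and steady-state tokens per second can be measured for a single, non-batched request. This is an illustration only, not BALI's actual implementation; the `dummy_token_stream` generator is a hypothetical stand-in for a real inference framework's streaming API.

```python
import time

def dummy_token_stream(n_tokens=32, latency_s=0.02):
    """Hypothetical stand-in for an inference framework's streaming
    generate() call: yields one token per step after a simulated delay."""
    for i in range(n_tokens):
        time.sleep(latency_s)  # simulated per-token decode time
        yield f"tok{i}"

def measure_sequential_decoding(stream):
    """Measure time-to-first-token (TTFT) and steady-state decoding
    speed (tokens/s) for a single, non-batched request."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in stream:
        if ttft is None:
            ttft = time.perf_counter() - start  # time-to-first-token
        count += 1
    total = time.perf_counter() - start
    # Exclude the first token so the rate reflects steady-state decoding.
    tps = (count - 1) / (total - ttft) if count > 1 else 0.0
    return ttft, tps

if __name__ == "__main__":
    ttft, tps = measure_sequential_decoding(dummy_token_stream())
    print(f"TTFT: {ttft * 1000:.1f} ms, decode speed: {tps:.1f} tokens/s")
```

Reporting TTFT separately from steady-state tokens per second matters because prefill and decode stress different parts of the stack, so a framework can lead on one metric while trailing on the other.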

Details

Original language: English
Pages (from-to): 98976-98989
Number of pages: 14
Journal: IEEE Access
Volume: 13
Publication status: Published - 5 Jun 2025
Peer-reviewed: Yes

External IDs

ORCID /0000-0001-7047-3813/work/191041795

Keywords

  • generation speed, inference standardization, LLM inference, LLM inference benchmarking, performance analysis, transformer decoder