ProofTeller: Exposing recency bias in LLM reasoning and its side effects on communication
Publication: Contribution to book/conference proceedings/anthology/report › Conference paper › Contributed › Peer-reviewed
Contributors
Abstract
Large language models (LLMs) are increasingly applied in domains that demand reliable and interpretable reasoning. While formal methods can generate provably correct proofs, these proofs are often inaccessible to non-expert users. This raises a natural question: can LLMs, when given a verified proof, faithfully interpret its reasoning and communicate it clearly? We introduce ProofTeller, a benchmark that evaluates this ability across three tasks:
(1) identifying key proof steps,
(2) summarizing the reasoning, and
(3) explaining the result in concise natural language.
The benchmark covers three domains: Biology, Drones, and Recipes, representing scientific, safety-critical, and everyday reasoning scenarios. We find a consistent near-conclusion bias: LLMs tend to focus on steps closest to the final proof conclusion rather than on the most informative ones. A targeted human study confirms that explanations based on such steps are rated less appropriate for end users.
These findings indicate that even when reasoning is provided, current LLMs face challenges in communicating key information in a useful manner, highlighting the need for LLMs that can communicate important details reliably.
Details
| Original language | English |
|---|---|
| Title | International Joint Conference on Natural Language Processing & Asia-Pacific Chapter of the Association for Computational Linguistics |
| Publication status | Published - 2025 |
| Peer-reviewed | Yes |