QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios

Research output: Contribution to conferences › Paper › Contributed › Peer-reviewed


Abstract

Reasoning is key to many decision-making processes. It requires consolidating a set of rule-like premises, often associated with degrees of uncertainty, with observations in order to draw conclusions. In this work, we address both the case where premises are specified as numeric probabilistic rules and situations in which humans state their estimates using words expressing degrees of certainty. Existing probabilistic reasoning datasets simplify the task, e.g., by requiring the model only to rank textual alternatives, by including only binary random variables, or by using a limited set of templates that results in less varied text. In this work, we present QUITE, a question answering dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships. QUITE provides high-quality natural language verbalizations of premises together with evidence statements, and expects the answer to a question in the form of an estimated probability. We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types (causal, evidential, and explaining-away). Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning. We release QUITE and code for training and experiments on GitHub.
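To make the three reasoning types concrete, the following is a minimal illustrative sketch (not drawn from the QUITE dataset itself) using the classic burglary/earthquake/alarm network with hypothetical probabilities. Observing the alarm raises belief in a burglary (evidential reasoning); additionally observing an earthquake lowers it again (explaining-away), since the earthquake accounts for the alarm.

```python
from itertools import product

# Hypothetical prior and conditional probabilities (illustrative only).
P_B = 0.01          # P(Burglary)
P_E = 0.02          # P(Earthquake)
# P(Alarm = True | Burglary, Earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a):
    """Joint probability P(B=b, E=e, A=a) under the network factorization."""
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    pa = P_A[(b, e)]
    return p * (pa if a else 1 - pa)

def posterior(query, evidence):
    """P(query = True | evidence), computed by full enumeration."""
    num = den = 0.0
    for b, e, a in product([True, False], repeat=3):
        world = {"B": b, "E": e, "A": a}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(b, e, a)
        den += p
        if world[query]:
            num += p
    return num / den

# Evidential reasoning: the alarm raises belief in a burglary.
p1 = posterior("B", {"A": True})
# Explaining-away: observing an earthquake as well lowers that belief.
p2 = posterior("B", {"A": True, "E": True})
print(p1, p2)  # p2 < p1: the earthquake "explains away" the alarm
```

Full enumeration is exponential in the number of variables; it is shown here only to make the semantics of the queries explicit.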

Details

Original language: English
Pages: 2634-2652
Number of pages: 19
Publication status: Published - 2024
Peer-reviewed: Yes

Conference

Title: 2024 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2024
Duration: 12-16 November 2024
Degree of recognition: International event
Location: Hyatt Regency Miami Hotel & Online
City: Miami
Country: United States of America

External IDs

ORCID: /0000-0002-5410-218X/work/173517502
Scopus: 85217809240