LM-KBC 2025: 4th Challenge on Knowledge Base Construction from Pre-trained Language Models

Research output: Contribution to conference proceedings (conference contribution, contributed, peer-reviewed)

Abstract

Pretrained language models (LMs) have significantly advanced a variety of semantic tasks and have shown promise as sources of knowledge elicitation. While prior work has studied this ability through probing or prompting, the potential of LMs for large-scale knowledge base construction remains underexplored. The fourth edition of the LM-KBC Challenge invited participants to build knowledge bases directly from LMs, given specific subjects and relations. Unlike existing probing benchmarks, the challenge imposed no simplifying assumptions on relation cardinality: a subject entity may be linked to zero, one, or multiple object entities. To ensure accessibility, the challenge featured a single track in which all participants used the same LLM. Five submissions were received, exploring a variety of ideas including self-consistency, self-RAG, reasoning, and prompt optimization.
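The task format described above can be illustrated with a minimal, hypothetical sketch of the elicitation harness around the shared LLM. The prompt wording, relation name, and helper functions below are illustrative assumptions, not the challenge's official baseline.

# A minimal, hypothetical sketch of the LM-KBC task format: for a given
# subject and relation, elicit zero, one, or multiple object entities.
# Prompt wording, the relation name, and all helpers are illustrative
# assumptions, not the challenge's official baseline.
import json

def build_prompt(subject: str, relation: str) -> str:
    # Ask for a JSON list so that empty, single, and multi-valued
    # answers share one output format (the challenge imposes no
    # cardinality assumption).
    return (
        f"List every object entity for the relation '{relation}' with "
        f"subject '{subject}'. Answer only with a JSON list of strings; "
        f"answer [] if there are none."
    )

def parse_objects(completion: str) -> list[str]:
    # Parse the model's completion, tolerating malformed output by
    # falling back to an empty object list.
    try:
        objects = json.loads(completion)
    except json.JSONDecodeError:
        return []
    if not isinstance(objects, list):
        return []
    return [o for o in objects if isinstance(o, str)]

# Zero-, single-, and multi-object completions all parse uniformly:
print(parse_objects("[]"))                   # []
print(parse_objects('["Tokyo"]'))            # ['Tokyo']
print(parse_objects('["France", "Spain"]'))  # ['France', 'Spain']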

Details

Original language: English
Title of host publication: KBC-LM Workshop and LM-KBC Challenge at ISWC 2025
Editors: Simon Razniewski, Jan-Christoph Kalo, Duygu Islakoğlu, Tuan-Phong Nguyen, Bohui Zhang
Number of pages: 7
Publication status: Published - 2025
Peer-reviewed: Yes

Publication series

Series: CEUR Workshop Proceedings
Volume: 4041
ISSN: 1613-0073

Other

Title: 4th Challenge on Knowledge Base Construction from Pre-trained Language Models
Abbreviated title: LM-KBC 2025
Conference number: 4
Description: co-located with the 24th International Semantic Web Conference (ISWC 2025)
Duration: 2 November 2025
Location: Nara Prefectural Convention Center
City: Nara
Country: Japan

External IDs

ORCID: /0000-0002-5410-218X/work/194826583
