LM-KBC 2025: 4th Challenge on Knowledge Base Construction from Pre-trained Language Models

Publication: Contribution in book/conference proceedings/edited volume › Contribution in conference proceedings › Contributed › Peer-reviewed

Contributors

Abstract

Pretrained language models (LMs) have significantly advanced a variety of semantic tasks and have shown promise as sources for knowledge elicitation. While prior work has studied this ability through probing or prompting, the potential of LMs for large-scale knowledge base construction remains underexplored. The fourth edition of the LM-KBC Challenge invited participants to build knowledge bases directly from LMs, given specific subjects and relations. Unlike existing probing benchmarks, the challenge imposed no simplifying assumptions on relation cardinality, allowing a subject entity to be linked to zero, one, or multiple object entities. To ensure accessibility, the challenge featured a single track based on the same LLM to be used by all participants. Five submissions were received, exploring a variety of ideas including self-consistency, self-RAG, reasoning, and prompt optimization.

Details

Original language: English
Title: KBC-LM Workshop and LM-KBC Challenge at ISWC 2025
Editors: Simon Razniewski, Jan-Christoph Kalo, Duygu Islakoğlu, Tuan-Phong Nguyen, Bohui Zhang
Number of pages: 7
Publication status: Published - 2025
Peer-review status: Yes

Publication series

Series: CEUR Workshop Proceedings
Volume: 4041
ISSN: 1613-0073

Other

Title: 4th Challenge on Knowledge Base Construction from Pre-trained Language Models
Short title: LM-KBC 2025
Event number: 4
Description: co-located with the 24th International Semantic Web Conference (ISWC 2025)
Date: 2 November 2025
Website:
Venue: Nara Prefectural Convention Center
City: Nara
Country: Japan

External IDs

ORCID /0000-0002-5410-218X/work/194826583

Keywords

ASJC Scopus subject areas