LM-KBC 2025: 4th Challenge on Knowledge Base Construction from Pre-trained Language Models
Publication: Chapter in book/conference proceedings/collected edition/report › Contribution in conference proceedings › Contributed › Peer-reviewed
Contributors
Abstract
Pretrained language models (LMs) have significantly advanced a variety of semantic tasks and have shown promise as sources for knowledge elicitation. While prior work has studied this ability through probing or prompting, the potential of LMs for large-scale knowledge base construction remains underexplored. The fourth edition of the LM-KBC Challenge invited participants to build knowledge bases directly from LMs, given specific subjects and relations. Unlike existing probing benchmarks, the challenge imposed no simplifying assumptions on relation cardinality: a subject entity could be linked to zero, one, or multiple object entities. To ensure accessibility, the challenge featured a single track in which all participants used the same LLM. Five submissions were received, exploring a variety of ideas ranging from self-consistency and self-RAG to reasoning and prompt optimization.
Details
| Original language | English |
|---|---|
| Title | KBC-LM Workshop and LM-KBC Challenge at ISWC 2025 |
| Editors | Simon Razniewski, Jan-Christoph Kalo, Duygu Islakoğlu, Tuan-Phong Nguyen, Bohui Zhang |
| Number of pages | 7 |
| Publication status | Published - 2025 |
| Peer-reviewed | Yes |
Publication series
| Series | CEUR Workshop Proceedings |
|---|---|
| Volume | 4041 |
| ISSN | 1613-0073 |
Other
| Title | 4th challenge on Knowledge Base Construction from Pre-trained Language Models |
|---|---|
| Short title | LM-KBC 2025 |
| Event number | 4 |
| Description | co-located with the 24th International Semantic Web Conference (ISWC 2025) |
| Date | 2 November 2025 |
| Website | |
| Venue | Nara Prefectural Convention Center |
| City | Nara |
| Country | Japan |
External IDs
| ORCID | /0000-0002-5410-218X/work/194826583 |
|---|---|