Neural machine translating from natural language to SPARQL.
Research output: Contribution to journal › Research article › Contributed › peer-review
Abstract
SPARQL is a highly powerful query language for an ever-growing number of resources and knowledge graphs represented in the Resource Description Framework (RDF) data format. Using it requires a certain familiarity with the entities in the domain to be queried as well as expertise in the language's syntax and semantics, neither of which average web users can be assumed to possess. To overcome this limitation, automatically translating natural language questions into SPARQL queries has been a vibrant field of research. However, to date, the vast success of deep learning methods has not yet fully propagated to this research problem. This paper contributes to filling this gap by evaluating eight different Neural Machine Translation (NMT) models on the task of translating natural language into the structured query language SPARQL. While highlighting the importance of high-quantity and high-quality datasets, the results show that a Convolutional Neural Network (CNN)-based architecture dominates, reaching a Bilingual Evaluation Understudy (BLEU) score of up to 98 and an accuracy of up to 94%.
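The abstract reports BLEU and accuracy as the evaluation metrics for the generated SPARQL queries. As a minimal sketch of how such scores could be computed, assuming Python with the sacrebleu package and plain whitespace-tokenized SPARQL strings (both assumptions for illustration, not the paper's actual evaluation pipeline), one might evaluate a set of predictions as follows:

```python
# Minimal sketch: scoring predicted SPARQL queries against gold references
# with corpus-level BLEU and exact-match accuracy. The sacrebleu package
# and the example queries are illustrative assumptions only.
import sacrebleu

# Hypothetical model outputs and gold SPARQL queries as plain strings.
predictions = [
    "SELECT ?x WHERE { ?x rdf:type dbo:City . ?x dbo:country dbr:Germany }",
    "SELECT ?p WHERE { dbr:Berlin dbo:populationTotal ?p }",
]
references = [
    "SELECT ?x WHERE { ?x rdf:type dbo:City . ?x dbo:country dbr:Germany }",
    "SELECT ?p WHERE { dbr:Berlin dbo:populationTotal ?p }",
]

# Corpus-level BLEU; sacrebleu expects a list of reference sets.
bleu = sacrebleu.corpus_bleu(predictions, [references])

# Exact-match accuracy: fraction of predictions identical to their reference query.
accuracy = sum(p == r for p, r in zip(predictions, references)) / len(references)

print(f"BLEU: {bleu.score:.1f}")
print(f"Accuracy: {accuracy:.1%}")
```

In practice, exact string match is a strict criterion for SPARQL, since semantically equivalent queries can differ in variable names or triple order; this sketch only mirrors the two metric types named in the abstract.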
Details
| Original language | English |
| --- | --- |
| Pages (from-to) | 510–519 |
| Number of pages | 10 |
| Journal | Future Generation Computer Systems |
| Volume | 117 |
| Publication status | Published - Apr 2021 |
| Peer-reviewed | Yes |
External IDs
| Scopus | 85098985416 |
| --- | --- |
Keywords
- Learning structured knowledge
- Natural language queries
- Neural Machine Translation
- SPARQL