Extracting Operator Trees from Model Embeddings
Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › peer-review
Abstract
Transformer-based language models capture several linguistic properties, including hierarchical structures such as dependency and constituency trees. Whether similar structures for mathematics can be extracted from language models has not yet been explored. This work probes current state-of-the-art models for the extractability of operator trees from their contextualized embeddings, using the structural probe designed by Hewitt and Manning (2019). We release our code and data set for future analyses.
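To illustrate the approach the abstract refers to, the following is a minimal numpy sketch of the Hewitt and Manning (2019) structural probe, adapted here as an assumption of how it would apply to operator trees: a learned linear map `B` projects contextual embeddings so that squared L2 distances in the projected space approximate pairwise tree distances. All names (`probe_distances`, `probe_loss`) and the toy data are illustrative, not taken from the paper's released code.

```python
import numpy as np

def probe_distances(H, B):
    """Pairwise squared distances ||B(h_i - h_j)||^2 for embeddings H of shape (n, d)."""
    P = H @ B.T                           # project into probe space, shape (n, k)
    diff = P[:, None, :] - P[None, :, :]  # all pairwise differences, shape (n, n, k)
    return (diff ** 2).sum(-1)            # squared L2 distances, shape (n, n)

def probe_loss(H, B, T):
    """L1 loss between predicted squared distances and gold tree distances T."""
    n = len(H)
    D = probe_distances(H, B)
    return np.abs(D - T)[np.triu_indices(n, k=1)].mean()

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 16))          # toy "contextual embeddings" for 4 tokens
B = rng.normal(size=(8, 16)) * 0.1    # probe parameters with rank k = 8
T = np.array([[0, 1, 2, 2],           # toy gold tree distances: token 1 is the
              [1, 0, 1, 1],           # root operator, tokens 0, 2, 3 its children
              [2, 1, 0, 2],
              [2, 1, 2, 0]], float)
```

In training, `B` would be optimized by gradient descent to minimize `probe_loss` over a corpus of formulas paired with gold operator trees; a tree can then be recovered from the predicted distance matrix, e.g. via a minimum spanning tree.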
Details
| Original language | English |
|---|---|
| Title of host publication | MathNLP 2022 - 1st Workshop on Mathematical Natural Language Processing, Proceedings of the Workshop |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 40-50 |
| Number of pages | 11 |
| ISBN (electronic) | 9781959429142 |
| Publication status | Published - 2022 |
| Peer-reviewed | Yes |
Workshop
| Title | 1st Workshop on Mathematical Natural Language Processing |
|---|---|
| Abbreviated title | MathNLP 2022 |
| Conference number | 1 |
| Duration | 8 December 2022 |
| Location | Abu Dhabi National Exhibition Centre & Online |
| City | Abu Dhabi |
| Country | United Arab Emirates |
External IDs
| ORCID | /0000-0001-8107-2775/work/197963764 |
|---|---|