DNA language model GROVER learns sequence context in the human genome
Research output: Contribution to journal › Research article › Contributed › peer-review
Abstract
Deep-learning models that learn a sense of language on DNA have achieved a high level of performance on genome biological tasks. Genome sequences follow rules similar to natural language but are distinct in the absence of a concept of words. We established byte-pair encoding on the human genome and trained a foundation language model called GROVER (Genome Rules Obtained Via Extracted Representations), with the vocabulary selected via a custom task, next-k-mer prediction. The defined dictionary of tokens in the human genome best carries the information content for GROVER. Analysing learned representations, we observed that trained token embeddings primarily encode information related to frequency, sequence content and length. Some tokens are localized primarily in repeats, whereas the majority are widely distributed across the genome. GROVER also learns context and lexical ambiguity. Average trained embeddings of genomic regions relate to functional genomics annotation and thus indicate learning of these structures purely from the contextual relationships of tokens. This highlights the extent of information content encoded by the sequence that can be grasped by GROVER. On fine-tuning tasks addressing genome biology, with questions of genome element identification and protein–DNA binding, GROVER exceeds other models’ performance. GROVER learns sequence context, a sense for structure and language rules. Extracting this knowledge can be used to compose a grammar book for the code of life.
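The byte-pair encoding mentioned above can be illustrated with a toy sketch: starting from single nucleotides, the most frequent adjacent token pair is repeatedly merged into a new vocabulary token. This is a minimal, hypothetical illustration of the general BPE procedure, not GROVER's actual tokenizer or vocabulary-selection pipeline.

```python
from collections import Counter

def bpe_merges(sequence, num_merges):
    """Toy byte-pair encoding on a DNA string: begin with single
    nucleotides and repeatedly merge the most frequent adjacent pair
    into a new token, growing the vocabulary by one each round."""
    tokens = list(sequence)
    vocab = set(tokens)
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merged, i = [], 0
        while i < len(tokens):
            # Greedily replace each occurrence of the chosen pair.
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
        vocab.add(a + b)
    return tokens, vocab

# Two merge rounds on a short illustrative sequence.
tokens, vocab = bpe_merges("ACGTACGTACGA", 2)
```

In the paper, the number of merge rounds (and hence the vocabulary size) is what the next-k-mer prediction task selects; here it is simply a fixed parameter.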
Details
| Original language | English |
|---|---|
| Pages (from-to) | 911-923 |
| Number of pages | 13 |
| Journal | Nature Machine Intelligence |
| Volume | 6 |
| Issue number | 8 |
| Publication status | Published - Aug 2024 |
| Peer-reviewed | Yes |
External IDs
Mendeley: 3ef42a69-be20-3695-ab07-b979a14ee368