Syntactic vs Semantic Linear Abstraction and Refinement of Neural Networks
Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › peer-review
Abstract
Abstraction is a key verification technique for improving scalability. However, its use for neural networks has so far been extremely limited. Previous approaches for abstracting classification networks replace several neurons with a single, sufficiently similar one. The similarity can be defined either syntactically (using quantities on the connections between neurons) or semantically (on the activation values of neurons for various inputs). Unfortunately, previous approaches achieve only moderate reductions, when implemented at all. In this work, we provide a more flexible framework in which a neuron can be replaced with a linear combination of other neurons, improving the reduction. We apply this approach to both syntactic and semantic abstractions, and implement and evaluate them experimentally. Furthermore, we introduce a refinement method for our abstractions, allowing a better balance between reduction and precision to be found.
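The core idea described in the abstract, replacing a neuron with a linear combination of other neurons based on their activation values (the semantic variant), can be sketched as a least-squares fit. The names, shapes, and synthetic data below are illustrative assumptions for exposition, not the paper's actual algorithm or implementation.

```python
import numpy as np

# Hypothetical setup: acts[i, j] is the activation of neuron j of some
# hidden layer on input sample i, recorded over a set of test inputs.
rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 5))
# Make neuron 4 close to a linear combination of neurons 0-2 plus small
# noise, so it is a good candidate for abstraction.
acts[:, 4] = acts[:, :3] @ np.array([0.5, -1.0, 2.0]) + 0.01 * rng.normal(size=200)

target = 4            # neuron to be removed from the layer
basis = [0, 1, 2, 3]  # neurons kept in the abstracted layer

# Least-squares fit: coefficients c minimizing
# || acts[:, basis] @ c - acts[:, target] ||_2
c, *_ = np.linalg.lstsq(acts[:, basis], acts[:, target], rcond=None)

# In an abstraction of this kind, the removed neuron's outgoing weights
# would then be redistributed onto the basis neurons via c; here we only
# measure how well the linear combination reproduces the neuron.
error = np.linalg.norm(acts[:, basis] @ c - acts[:, target])
print(error)
```

A syntactic variant would fit weights on the connections instead of activation values; a refinement step, as mentioned in the abstract, could re-add neurons whose fit error is too large.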
Details
Original language | English
---|---
Title of host publication | Automated Technology for Verification and Analysis |
Editors | Étienne André, Jun Sun |
Publisher | Springer, Cham |
Pages | 401-421 |
Number of pages | 21 |
ISBN (electronic) | 978-3-031-45329-8 |
ISBN (print) | 978-3-031-45328-1 |
Publication status | Published - 2023 |
Peer-reviewed | Yes |
Publication series
Series | Lecture Notes in Computer Science, Volume 14215
---|---
ISSN | 0302-9743 |
External IDs
Scopus | 85175948369
---|---
ORCID | /0000-0002-3437-0240/work/165454710 |