Learning deep belief networks from non-stationary streams
Publication: Contribution to book/conference proceedings/edited volume/expert report › Contribution to conference proceedings › Contributed › Peer-reviewed
Contributors
Abstract
Deep learning has proven to be beneficial for complex tasks such as classifying images. However, this approach has mostly been applied to static datasets. Analyzing non-stationary streams of data (e.g., under concept drift) raises specific issues connected with the temporal and changing nature of the data. In this paper, we propose Adaptive Deep Belief Networks, a proof-of-concept method showing how deep learning can be generalized to learn online from changing streams of data. We do so by exploiting the generative properties of the model to incrementally re-train the Deep Belief Network whenever new data are collected. This approach eliminates the need to store past observations and therefore requires only constant memory. Hence, our approach can be valuable for life-long learning from non-stationary data streams.
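The core mechanism described in the abstract, re-training on newly arrived observations mixed with samples generated from the current model instead of stored history, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it uses a single RBM layer rather than a full DBN, and the CD-1 training rule, the Gibbs-sampling schedule, and the `replay_ratio` parameter are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with CD-1 (stands in for one DBN layer)."""
    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible bias
        self.c = np.zeros(n_hid)   # hidden bias
        self.lr = lr

    def _h_given_v(self, v):
        return sigmoid(v @ self.W + self.c)

    def _v_given_h(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_update(self, v0):
        """One contrastive-divergence (CD-1) step on a mini-batch v0."""
        h0 = self._h_given_v(v0)
        h0_s = (rng.random(h0.shape) < h0).astype(float)
        v1 = self._v_given_h(h0_s)
        h1 = self._h_given_v(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)

    def sample(self, n, gibbs_steps=20):
        """Generate n 'replay' samples by Gibbs sampling from the model."""
        v = (rng.random((n, self.b.size)) < 0.5).astype(float)
        for _ in range(gibbs_steps):
            h = (rng.random((n, self.c.size)) < self._h_given_v(v)).astype(float)
            v = (rng.random((n, self.b.size)) < self._v_given_h(h)).astype(float)
        return v

def adapt_to_stream(rbm, stream, replay_ratio=1.0):
    """For each incoming batch, mix it with model-generated samples and
    re-train, so no past observations need to be stored (constant memory)."""
    for batch in stream:
        replay = rbm.sample(int(replay_ratio * len(batch)))
        rbm.cd1_update(np.vstack([batch, replay]))

# Toy usage: a binary stream whose distribution drifts midway.
if __name__ == "__main__":
    rbm = RBM(n_vis=16, n_hid=8)
    early = [(rng.random((32, 16)) < 0.2).astype(float) for _ in range(50)]
    late = [(rng.random((32, 16)) < 0.8).astype(float) for _ in range(50)]
    adapt_to_stream(rbm, early + late)
    print(rbm.sample(5).mean())  # should lean toward the more recent concept
```

In this scheme the generated replay batch stands in for all past data, so memory use stays constant regardless of how long the stream runs; after a concept drift, the replay samples gradually shift toward the new distribution as the model adapts.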
Details
Original language | English |
---|---|
Title | Artificial Neural Networks and Machine Learning, ICANN 2012 - 22nd International Conference on Artificial Neural Networks, Proceedings |
Pages | 379-386 |
Number of pages | 8 |
Edition | PART 2 |
Publication status | Published - 2012 |
Peer-reviewed | Yes |
Externally published | Yes |
Publication series
Series | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Number | PART 2 |
Volume | 7553 LNCS |
ISSN | 0302-9743 |
Conference
Title | 22nd International Conference on Artificial Neural Networks, ICANN 2012 |
---|---|
Duration | 11 - 14 September 2012 |
City | Lausanne |
Country | Switzerland |
External IDs
ORCID | /0000-0001-9430-8433/work/158768042 |
---|---|
Keywords
- Adaptive Deep Belief Networks, Adaptive Learning, Concept drift, Deep Belief Networks, Deep Learning, Generating samples, Generative model, Incremental Learning, Non-stationary Learning