ALWars: Combat-Based Evaluation of Active Learning Strategies

Research output: Conference contribution (contributed, peer-reviewed)

Abstract

The demand for annotated datasets for supervised machine learning (ML) projects is growing rapidly. Annotating a dataset often requires domain experts and is a time-consuming and costly process. A premier method to reduce this overhead drastically is Active Learning (AL). Despite its tremendous potential for annotation cost savings, AL is still not used universally in ML projects. The number of available AL strategies has risen significantly in recent years, leading to an increased demand for thorough evaluations of AL strategies. Existing evaluations in many cases show contradictory results, with no clearly superior strategies. To help researchers tame the AL zoo we present ALWars: an interactive system with a rich set of features for comparing AL strategies in a novel replay view mode of all AL episodes, with many visualizations and metrics. Under the hood, we support a rich variety of AL strategies via the API of the powerful AL framework ALiPy [21], amounting to over 25 AL strategies out of the box.
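To make the kind of AL episode that ALWars replays concrete, here is a minimal sketch of a generic pool-based active learning loop with least-confidence uncertainty sampling. This is an illustration of the technique only, not ALWars or ALiPy code; the dataset, model choice, and loop parameters are all assumptions for the example.

```python
# Generic pool-based active learning loop (uncertainty sampling).
# Illustrative only -- not the ALWars/ALiPy implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Start with a small labeled seed set; the rest forms the unlabeled pool.
labeled = list(rng.choice(len(X), size=10, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # 20 AL iterations, batch size 1
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[unlabeled])
    # Least confidence: query the instance whose top-class probability
    # is lowest, i.e. where the model is most uncertain.
    pick = unlabeled[int(np.argmin(proba.max(axis=1)))]
    labeled.append(pick)       # "oracle" labels it (y is known here)
    unlabeled.remove(pick)

model.fit(X[labeled], y[labeled])
print(len(labeled))  # 30 labeled instances after 20 queries
```

Each iteration of this loop corresponds to one AL episode step that a tool like ALWars can record and replay, visualizing which instance each strategy queried and how the model evolved.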

Details

Original language: English
Title of host publication: Advances in Information Retrieval - 44th European Conference on IR Research, ECIR 2022, Stavanger, Norway, April 10-14, 2022, Proceedings
Editors: Matthias Hagen, Suzan Verberne, Craig Macdonald, Christin Seifert, Krisztian Balog, Kjetil Nørvåg, Vinay Setty
Pages: 294-299
Number of pages: 6
Publication status: Published - 2022
Peer-reviewed: Yes

External IDs

Scopus 85128785150
dblp conf/ecir/GonsiorKSTL22
Mendeley 51f8a3bc-7169-32f8-bdbc-32965c2141c4
ORCID /0000-0002-5985-4348/work/162348851

Keywords

  • Demo, Active learning, Machine learning, GUI, Python