DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model

Research output: Contribution to book/conference proceedings › Conference contribution (contributed, peer-reviewed)

Contributors

  • Eldar Insafutdinov, Max Planck Institute for Informatics (Author)
  • Leonid Pishchulin, Max Planck Institute for Informatics (Author)
  • Bjoern Andres, Max Planck Institute for Informatics (Author)
  • Mykhaylo Andriluka, Max Planck Institute for Informatics and Stanford University (Author)
  • Bernt Schiele, Max Planck Institute for Informatics (Author)

Abstract

The goal of this paper is to advance the state of the art of articulated pose estimation in scenes with multiple people. To that end we contribute on three fronts. We propose (1) improved body part detectors that generate effective bottom-up proposals for body parts; (2) novel image-conditioned pairwise terms that allow assembling the proposals into a variable number of consistent body part configurations; and (3) an incremental optimization strategy that explores the search space more efficiently, leading both to better performance and to significant speed-ups. Evaluation is done on two single-person and two multi-person pose estimation benchmarks. The proposed approach significantly outperforms the best known multi-person pose estimation results while demonstrating competitive performance on the task of single-person pose estimation (models and code available at http://pose.mpi-inf.mpg.de).
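To make the assembly idea in (2) and (3) concrete, here is a minimal, illustrative sketch of grouping bottom-up body part proposals into person clusters using pairwise compatibility scores. It is not the authors' method, which solves a joint partitioning-and-labeling integer linear program incrementally; the greedy strategy, the toy distance-based pairwise term, and all names and thresholds below are assumptions for illustration only.

```python
# Illustrative sketch: greedily assemble part proposals into a variable number
# of person clusters. This stands in for (but does not reproduce) the paper's
# incremental ILP optimization with learned image-conditioned pairwise terms.
from dataclasses import dataclass
from itertools import combinations


@dataclass
class PartProposal:
    part_type: str   # e.g. "head", "shoulder", "knee"
    x: float
    y: float
    score: float     # unary detection confidence


def pairwise_score(a: PartProposal, b: PartProposal) -> float:
    """Toy pairwise compatibility based only on distance; the paper instead
    learns image-conditioned pairwise terms."""
    dist = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
    return max(0.0, 1.0 - dist / 200.0)


def assemble_people(proposals, min_pair_score=0.5):
    """Greedily merge proposals into clusters (people), never assigning the
    same body part type twice to one cluster."""
    clusters = [[p] for p in proposals]
    merged = True
    while merged:
        merged = False
        best = None
        for (i, ci), (j, cj) in combinations(enumerate(clusters), 2):
            # Skip merges that would duplicate a body part within one person.
            if {p.part_type for p in ci} & {p.part_type for p in cj}:
                continue
            s = sum(pairwise_score(a, b) for a in ci for b in cj) / (len(ci) * len(cj))
            if s >= min_pair_score and (best is None or s > best[0]):
                best = (s, i, j)
        if best is not None:
            _, i, j = best
            clusters[i].extend(clusters[j])
            del clusters[j]
            merged = True
    return clusters
```

Because clusters are merged rather than assigned to a fixed number of slots, the number of recovered people emerges from the data, mirroring the "variable number of consistent body part configurations" described in the abstract.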

Details

Original language: English
Title of host publication: Computer Vision – ECCV 2016
Editors: Bastian Leibe, Jiri Matas, Nicu Sebe, Max Welling
Publisher: Springer, Cham
Pages: 34–50
ISBN (electronic): 978-3-319-46466-4
ISBN (print): 978-3-319-46465-7
Publication status: Published - 2016
Peer-reviewed: Yes
Externally published: Yes

Publication series

Series: Lecture Notes in Computer Science
Volume: 9910
ISSN: 0302-9743

External IDs

Scopus: 84990033515
ORCID: /0000-0001-5036-9162/work/143781900