Multimedia Computing and Computer Vision Lab

From Multimedia Computing Lab - University of Augsburg

Deep Sports Pose

Top: automatically estimated poses of a swimmer during the start phase (left) and a long jump athlete (right). Bottom: predicted long jump phases based on pose estimates.

Video recordings of athletes are an important tool in many sports, including swimming and long/triple jump, for evaluating performance and assessing possible improvements. For a quantitative evaluation, the video material often has to be annotated manually, which creates a substantial workload and in practice limits such analyses to top-tier athletes. In this joint project with the Olympic Training Centers (OSPs) Hamburg/Schleswig-Holstein and Hessen, we research human pose estimation and video event detection based on deep neural networks that can be applied to various sports and environments. We evaluate our research on two very different examples: start phases in swimming and long/triple jump recordings. Our main focus lies on time-continuous predictions and on the fusion of multiple synchronized camera streams. The goal of the project is a reliable, automatic pose and event detection system that makes quantitative performance evaluation accessible to more athletes, more frequently.
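To give a rough idea of the two ingredients mentioned above, the sketch below shows a deliberately simplified stand-in for them: per-frame keypoints from multiple synchronized cameras are fused by averaging (assuming they have already been mapped into a common reference frame), and the resulting joint trajectories are smoothed over time with a moving average as a crude placeholder for time-continuous prediction. The function names, array shapes, and fusion/smoothing choices are illustrative assumptions, not the project's actual method.

```python
import numpy as np


def fuse_cameras(poses):
    """Fuse per-frame keypoints from multiple synchronized cameras.

    poses: array of shape (num_cameras, num_frames, num_joints, 2),
    assumed (hypothetically) to be mapped into a common reference frame.
    Returns the per-frame mean over cameras: (num_frames, num_joints, 2).
    """
    return np.asarray(poses, dtype=float).mean(axis=0)


def smooth_trajectory(keypoints, window=5):
    """Temporally smooth joint trajectories with a centered moving average.

    keypoints: array of shape (num_frames, num_joints, 2).
    A crude stand-in for the time-continuous predictions the project targets;
    `window` must be odd so the average stays centered on each frame.
    """
    assert window % 2 == 1, "window must be odd"
    kernel = np.ones(window) / window
    half = window // 2
    # Edge-pad along the time axis so the output keeps num_frames entries.
    padded = np.pad(keypoints, ((half, half), (0, 0), (0, 0)), mode="edge")
    out = np.empty_like(np.asarray(keypoints, dtype=float))
    for j in range(out.shape[1]):          # each joint
        for c in range(out.shape[2]):      # x and y coordinate
            out[:, j, c] = np.convolve(padded[:, j, c], kernel, mode="valid")
    return out
```

In a real system, averaging would be replaced by a geometry-aware fusion (e.g. triangulation with calibrated cameras) and the moving average by a learned temporal model; the sketch only illustrates the data flow.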

This joint project is funded by the Federal Institute for Sports Science (Bundesinstitut für Sportwissenschaft, BISp) based on a resolution of the German Bundestag, starting in January 2017.

For more information, please contact Moritz Einfalt.


  • Moritz Einfalt, Dan Zecha, Rainer Lienhart.
    Activity-conditioned continuous human pose estimation for performance analysis of athletes using the example of swimming.
    IEEE Winter Conference on Applications of Computer Vision 2018 (WACV 2018), Lake Tahoe, NV, USA, March 2018. [arXiv][PDF]
  • Rainer Lienhart, Moritz Einfalt, Dan Zecha.
    Mining Automatically Estimated Poses from Video Recordings of Top Athletes.
    International Journal of Computer Science in Sport (IJCSS), December 2018. [arXiv]