Deep Skijump Pose

Figure: Top: Continuously estimated joint trajectories are synchronized with force measurements in order to train a deep-learning-based force prediction network. Bottom: Original joint detections from multiple camera views (left) and rectified poses (right).
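
The force prediction described in the caption can be pictured as a sequence-to-sequence regression from joint trajectories to per-frame force values. Below is a minimal sketch in PyTorch of such a temporal model, assuming a small stack of 1D convolutions over time; the joint count, layer widths, and kernel sizes are illustrative assumptions, not the architecture from the MMSports'18 paper referenced below.

```python
import torch
import torch.nn as nn

class ForcePredictor(nn.Module):
    """Toy temporal-convolutional model: maps a sequence of 2D joint
    positions, shape (batch, 2 * num_joints, frames), to one force
    value per frame, shape (batch, 1, frames). All layer sizes here
    are illustrative assumptions, not the published architecture."""
    def __init__(self, num_joints=14, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2 * num_joints, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),  # per-frame force estimate
        )

    def forward(self, joint_sequences):
        return self.net(joint_sequences)

# Usage: one jump, 200 frames of 14 (x, y) joints, placeholder data.
model = ForcePredictor()
joints = torch.randn(1, 28, 200)
forces = model(joints)  # shape (1, 1, 200)
```

Trained against the synchronized force measurements, such a model could then stand in for the physical sensors during regular training sessions.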


In a joint effort with the Institute for Applied Training Science (Institut für Angewandte Trainingswissenschaft, IAT) in Leipzig, we develop a training feedback system for improving the jump posture of professional ski jumpers. In this project, we research deep learning algorithms for continuous athlete and ski pose estimation and for tracking the body's center of gravity. We use this tracking information to infer kinematic and ballistic flight parameters and to approximate external force sensor measurements, allowing for immediate training feedback on a large set of performance-relevant parameters.
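
As a rough illustration of the center-of-gravity tracking, the NumPy sketch below takes a mass-weighted mean of a few representative 2D joint positions and derives the vertical velocity, one possible ballistic flight parameter, by finite differences. The joint set and mass fractions are simplified assumptions for illustration (biomechanical models such as de Leva's define per-segment values); this is not the project's actual method.

```python
import numpy as np

# Rough mass fraction per representative joint. These values are
# simplified illustrative assumptions; biomechanical models
# (e.g., de Leva, 1996) define them per body segment instead.
MASS_FRACTIONS = {
    "head": 0.08, "torso": 0.50,
    "l_thigh": 0.10, "r_thigh": 0.10,
    "l_shank": 0.06, "r_shank": 0.06,
    "l_arm": 0.05, "r_arm": 0.05,
}

def center_of_gravity(joints_2d):
    """Mass-weighted mean of 2D joint positions {name: (x, y)}."""
    total = sum(MASS_FRACTIONS.values())
    return sum(f * np.asarray(joints_2d[name], dtype=float)
               for name, f in MASS_FRACTIONS.items()) / total

def vertical_velocity(cog_track, fps):
    """Finite-difference vertical velocity from a (frames, 2) CoG
    trajectory sampled at the given frame rate."""
    return np.gradient(cog_track[:, 1]) * fps
```

Applied per frame to the estimated poses, this yields the continuous CoG trajectory from which flight parameters can be read off.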

This joint project was funded by the Federal Institute for Sports Science (Bundesinstitut für Sportwissenschaft, BISp) based on a resolution of the German Bundestag.

For more information, please visit the project page or contact Dan Zecha.

References:

  • Dan Zecha, Christian Eggert, Moritz Einfalt, Stephan Brehm, Rainer Lienhart.
    A Convolutional Sequence to Sequence Model for Multimodal Dynamics Prediction in Ski Jumps.
    First International ACM Workshop on Multimodal Content Analysis in Sports (ACM MMSports'18), part of ACM Multimedia 2018. Seoul, Korea, October 2018. [PDF]
  • Dan Zecha, Moritz Einfalt, Christian Eggert, Rainer Lienhart.
    Kinematic Pose Rectification for Performance Analysis and Retrieval in Sports.
    IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2018. Salt Lake City, USA, June 2018. [PDF]