Hi, I'm Jesse!
Doctoral Candidate at the University of California, Davis
Efficient Neural Networks for Real-time Motion Style Transfer
Style is an intrinsic, inescapable part of human motion. It complements the content of motion to convey meaning, mood, and personality. Existing state-of-the-art motion style methods require large quantities of example data and intensive computational resources at runtime. To ensure output quality, such style transfer applications are often run on desktop machines with GPUs and significant memory. In this paper, we present a fast and expressive neural network-based motion style transfer method that generates stylized motion with quality comparable to the state-of-the-art method, but uses much less computational power and a much smaller memory footprint. Our method also allows the output to be adjusted in a latent style space, something not offered in previous approaches. Our style transfer model is implemented using three multi-layered networks: a pose network, a timing network, and a foot-contact network. A one-hot style vector serves as an input control knob and determines the stylistic output of these networks. During training, the networks are trained with a large motion capture database containing heterogeneous actions and various styles. Joint information vectors together with one-hot style vectors are extracted from motion data and fed to the networks. Once the networks have been trained, the database is no longer needed on the device, thus removing the large memory requirement of previous motion style methods. At runtime, our model takes novel input and allows real-valued numbers to be specified in the style vector, which can be used for interpolation, extrapolation, or mixing of styles. With much lower memory and computational requirements, our networks are efficient and fast enough for real-time use on mobile devices. Requiring no information about future states, the style transfer can be performed in an online fashion. We validate our result both quantitatively and perceptually, confirming its effectiveness and improvement over previous approaches.
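The sketch below is a rough illustration of the style-conditioning idea described above, not the code from the paper: a small multi-layer network takes per-frame joint features together with a style vector, and the same style slot that holds a one-hot vector during training can hold real-valued weights at runtime to interpolate or mix styles. The class name, layer sizes, feature dimensions, and number of styles are all assumptions made for the example.

# Minimal sketch (not the authors' released code) of conditioning a pose
# network on a style vector and mixing styles with real-valued weights.
# Dimensions and names below are illustrative assumptions.
import torch
import torch.nn as nn

NUM_STYLES = 8        # number of training styles (assumed)
POSE_DIM   = 63       # flattened joint-information vector for one frame (assumed)

class PoseNetwork(nn.Module):
    """Small multi-layer network: (pose features + style vector) -> stylized pose."""
    def __init__(self, pose_dim=POSE_DIM, num_styles=NUM_STYLES, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + num_styles, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, pose, style):
        # Concatenate joint features with the style "control knob" vector.
        return self.net(torch.cat([pose, style], dim=-1))

# Training would use one-hot style vectors; at runtime the same slot can hold
# real-valued weights, e.g. a 50/50 blend of styles 0 and 3:
pose_net = PoseNetwork()
frame = torch.randn(1, POSE_DIM)                 # one frame of joint features
mixed_style = torch.zeros(1, NUM_STYLES)
mixed_style[0, 0], mixed_style[0, 3] = 0.5, 0.5  # interpolation between two styles
stylized_frame = pose_net(frame, mixed_style)

Because each frame depends only on the current input and style vector, a model of this shape can run online without knowledge of future states, which is the property the abstract highlights for real-time mobile use.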
Paper [Link]
Presentation Slides [Link]
Supplemental Video [Link]
Bibtex
@article{Smith:2019:ENN:3352480.3340254,
  author     = {Smith, Harrison Jesse and Cao, Chen and Neff, Michael and Wang, Yingying},
  title      = {Efficient Neural Networks for Real-time Motion Style Transfer},
  journal    = {Proc. ACM Comput. Graph. Interact. Tech.},
  issue_date = {July 2019},
  volume     = {2},
  number     = {2},
  month      = jul,
  year       = {2019},
  issn       = {2577-6193},
  pages      = {13:1--13:17},
  articleno  = {13},
  numpages   = {17},
  url        = {http://doi.acm.org/10.1145/3340254},
  doi        = {10.1145/3340254},
  acmid      = {3340254},
  publisher  = {ACM},
  address    = {New York, NY, USA},
  keywords   = {character animation, deep learning, motion editing, style transfer},
}
Acknowledgements
We would like to acknowledge everyone at Snap, Inc. who made this work possible. In addition, we would like to thank the authors of [Xia et al. 2015] for generously sharing their data and code with us.