Abstract

Recently, convolutional networks (convnets) have proven useful for predicting optical flow. Much of this success is predicated on the availability of large datasets that require expensive and involved data acquisition and laborious labeling. To bypass these challenges, we propose an unsupervised approach (i.e., one that does not leverage ground-truth flow) to train a convnet end-to-end for predicting optical flow between two images. We use a loss function that combines a data term, which measures photometric constancy over time, with a spatial term, which models the expected variation of flow across the image. Together, these terms form a proxy for losses based on ground-truth flow. Empirically, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset.
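The two-term loss described in the abstract can be sketched compactly. Below is a minimal NumPy illustration, not the paper's exact formulation: it assumes single-channel images and a dense flow field, and the Charbonnier penalty, `eps`, and `smooth_weight` values are illustrative choices. The data term compares the first image against the second image backward-warped by the flow; the spatial term penalizes first differences of the flow field.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp `image` by `flow` with bilinear interpolation.
    image: (H, W) array; flow: (H, W, 2) array of (dx, dy) displacements."""
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * image[y0, x0] +
            wx * (1 - wy) * image[y0, x1] +
            (1 - wx) * wy * image[y1, x0] +
            wx * wy * image[y1, x1])

def charbonnier(x, eps=1e-3):
    # Differentiable robust penalty, approximately |x| away from zero.
    return np.sqrt(x ** 2 + eps ** 2)

def unsupervised_flow_loss(img1, img2, flow, smooth_weight=1.0):
    # Data term: brightness constancy between img1 and the warped img2.
    photometric = charbonnier(warp(img2, flow) - img1).mean()
    # Spatial term: penalize flow gradients (first differences).
    du = np.diff(flow, axis=0)  # vertical differences
    dv = np.diff(flow, axis=1)  # horizontal differences
    smoothness = charbonnier(du).mean() + charbonnier(dv).mean()
    return photometric + smooth_weight * smoothness
```

Because neither term references ground-truth flow, the loss can be computed on any image pair, which is what allows training on unlabeled video. In the paper this proxy loss is used to supervise a "FlowNet Simple"-style network; here it is shown standalone for clarity.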

Document

Paper

Citation

Jason J. Yu, Adam W. Harley, and Konstantinos G. Derpanis. Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness. In Computer Vision - ECCV 2016 Workshops, Part 3, 2016.
BibTeX:

@inproceedings{jjyu2016unsupflow,
    author = {Jason J. Yu and Adam W. Harley and Konstantinos G. Derpanis},
    title = {Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness},
    booktitle = {Computer Vision - ECCV 2016 Workshops, Part 3},
    year = {2016}
}

Code

Code

Extra

Workshop Slides
Poster