Abstract

Today, videos are the primary way information is shared over the Internet. Given the huge popularity of video-sharing platforms, it is imperative to make videos engaging for end users. Content creators rely on their own experience to craft engaging short videos from raw content. Several approaches have been proposed in the past to assist creators in the summarization process; however, it is hard to quantify the effect of these edits on end-user engagement. Moreover, the availability of video-consumption data has opened the possibility of predicting the effectiveness of a video before it is published. In this paper, we propose a novel framework that closes the feedback loop between automatic video summarization and its data-driven evaluation. Our Closing-the-Loop framework consists of two main steps that are repeated iteratively. Given an input video, we first generate a set of initial video summaries. Second, we predict the effectiveness of the generated variants using a data-driven model trained on users' video-consumption data. We employ a genetic algorithm to search the space of possible summaries (i.e., adding or removing shots) efficiently: only the variants with the highest predicted performance survive and generate new variants in their place. Our results show that the proposed framework improves the effectiveness of the generated summaries with minimal computational overhead compared to a baseline solution: 28.3% more video summaries fall in the highest effectiveness class than in the baseline.
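The iterative generate-then-predict loop can be illustrated with a short sketch. The snippet below is only a minimal illustration under our own assumptions (a summary modeled as a binary mask over shots, a toy `predicted_effectiveness` stand-in for the learned consumption model, and arbitrary population size, generation count, and mutation rate); it is not the authors' implementation.

```python
import random

# Illustrative sketch only: the shot representation, the scoring function,
# and all GA hyper-parameters below are assumptions for demonstration.

NUM_SHOTS = 40        # number of shots in the input video (assumed)
POP_SIZE = 20         # candidate summaries kept per generation (assumed)
GENERATIONS = 10
MUTATION_RATE = 0.05

def predicted_effectiveness(mask):
    """Placeholder for the data-driven model trained on consumption data.
    Here it simply rewards summaries that keep roughly a quarter of the shots."""
    target = NUM_SHOTS // 4
    return -abs(sum(mask) - target) + random.random()  # toy score

def random_summary():
    """A summary is a binary mask over shots: 1 = shot kept, 0 = shot dropped."""
    return [random.randint(0, 1) for _ in range(NUM_SHOTS)]

def crossover(a, b):
    """Single-point crossover between two parent summaries."""
    point = random.randrange(1, NUM_SHOTS)
    return a[:point] + b[point:]

def mutate(mask):
    """Add/remove shots by flipping bits with a small probability."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in mask]

def closing_the_loop():
    population = [random_summary() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Step 1: score every candidate with the (predicted) effectiveness model.
        scored = sorted(population, key=predicted_effectiveness, reverse=True)
        # Step 2: only the best variants survive and generate new variants.
        survivors = scored[: POP_SIZE // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children
    return max(population, key=predicted_effectiveness)

if __name__ == "__main__":
    best = closing_the_loop()
    print("kept shots:", [i for i, bit in enumerate(best) if bit])
```

Representing a summary as a binary mask makes the "adding/removing shots" mutations trivial bit flips, which keeps the search over variants cheap; in practice the scoring step would call the learned effectiveness predictor rather than the toy function above.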

Cite

If you find this work useful in your own research, please consider citing:
@inproceedings{xu2020closing,
  title={{Closing-the-Loop}: A Data-Driven Framework for Effective Video Summarization},
  author={Xu, Ran and Wang, Haoliang and Petrangeli, Stefano and Swaminathan, Viswanathan and Bagchi, Saurabh},
  booktitle={Proceedings of the 22nd IEEE International Symposium on Multimedia (ISM)},
  pages={201--205},
  year={2020}
}

Contact

For questions about the paper, please contact Ran Xu at martin.xuran@gmail.com.