TY - GEN
T1 - User Assisted Clustering Based Key Frame Extraction
AU - Shetty, Nisha P.
AU - Garg, Tushar
N1 - Publisher Copyright:
© 2020, Springer Nature Singapore Pte Ltd.
PY - 2020
Y1 - 2020
N2 - Our study proposes a novel method of key frame extraction for video data. Video summarization refers to condensing the amount of data that must be examined to retrieve noteworthy information from a video. Video summarization [1] is a challenging problem because content varies significantly from one video to another, and manually summarizing video requires substantial human labor. To tackle this issue, this paper proposes an algorithm that summarizes video without prior knowledge. Video summarization not only saves time but may also surface features that a human would miss at first sight. A significant difficulty is the lack of both a standard dataset and a metric for evaluating the performance of a given algorithm. We propose a modified approach to harvesting representative frames of a video sequence for abstraction. The idea is to quantitatively measure the difference between successive frames by computing statistics including the mean, variance, and standard deviation, and to retain only those frames whose difference exceeds a predefined threshold expressed in standard deviations. The proposed methodology is further enhanced by making it user interactive: the user enters a keyword describing the type of frames desired, reference images are retrieved via the Google Search API, and these are compared with the video frames to obtain the desired frames.
AB - Our study proposes a novel method of key frame extraction for video data. Video summarization refers to condensing the amount of data that must be examined to retrieve noteworthy information from a video. Video summarization [1] is a challenging problem because content varies significantly from one video to another, and manually summarizing video requires substantial human labor. To tackle this issue, this paper proposes an algorithm that summarizes video without prior knowledge. Video summarization not only saves time but may also surface features that a human would miss at first sight. A significant difficulty is the lack of both a standard dataset and a metric for evaluating the performance of a given algorithm. We propose a modified approach to harvesting representative frames of a video sequence for abstraction. The idea is to quantitatively measure the difference between successive frames by computing statistics including the mean, variance, and standard deviation, and to retain only those frames whose difference exceeds a predefined threshold expressed in standard deviations. The proposed methodology is further enhanced by making it user interactive: the user enters a keyword describing the type of frames desired, reference images are retrieved via the Google Search API, and these are compared with the video frames to obtain the desired frames.
UR - http://www.scopus.com/inward/record.url?scp=85089219361&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089219361&partnerID=8YFLogxK
U2 - 10.1007/978-981-15-6634-9_5
DO - 10.1007/978-981-15-6634-9_5
M3 - Conference contribution
AN - SCOPUS:85089219361
SN - 9789811566332
T3 - Communications in Computer and Information Science
SP - 46
EP - 55
BT - Advances in Computing and Data Sciences - 4th International Conference, ICACDS 2020, Revised Selected Papers
A2 - Singh, Mayank
A2 - Gupta, P.K.
A2 - Tyagi, Vipin
A2 - Flusser, Jan
A2 - Ören, Tuncer
A2 - Valentino, Gianluca
PB - Springer
T2 - 4th International Conference on Advances in Computing and Data Sciences, ICACDS 2020
Y2 - 24 April 2020 through 25 April 2020
ER -