Our study proposes a novel key frame extraction method for video data. Video summarization means condensing the amount of data that must be examined to retrieve noteworthy information from a video. It proves to be a challenging problem because video content varies significantly from one video to another, and manually summarizing a video requires substantial human labor. To tackle these issues, this paper proposes an algorithm that summarizes a video without prior knowledge. Video summarization not only saves time but may also reveal features that a human would miss at first sight. A significant difficulty is the lack of a standard dataset, as well as of a metric, for evaluating the performance of a given algorithm. We propose a modified version of harvesting representative frames of a video sequence for abstraction. The concept is to quantitatively measure the difference between successive frames by computing statistics including the mean, variance, and standard deviation; only those frames whose difference exceeds a predefined threshold, expressed in standard deviations, are retained. The proposed methodology is further enhanced by making it user-interactive: the user enters a keyword describing the type of frames desired, reference images are retrieved via the Google Search API, and these are compared with the video frames to extract the desired frames.
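The statistical thresholding idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes frames are equally sized grayscale images given as NumPy arrays, measures inter-frame change as the mean absolute pixel difference, and keeps frames whose change exceeds the mean difference plus `k` standard deviations.

```python
import numpy as np

def select_key_frames(frames, k=1.0):
    """Hypothetical sketch of threshold-based key frame selection.

    `frames` is a list of equally sized grayscale images (NumPy arrays).
    A frame is kept when its difference from the previous frame exceeds
    mean(diffs) + k * std(diffs).
    """
    # Mean absolute pixel difference between each pair of successive frames.
    diffs = np.array([
        np.mean(np.abs(frames[i + 1].astype(float) - frames[i].astype(float)))
        for i in range(len(frames) - 1)
    ])
    # Keep frames whose change from the previous frame is unusually large.
    threshold = diffs.mean() + k * diffs.std()
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Tiny synthetic example: mostly static frames with one abrupt change.
rng = np.random.default_rng(0)
static = rng.integers(0, 10, size=(8, 8))
frames = [static + rng.integers(0, 2, size=(8, 8)) for _ in range(5)]
frames.append(static + 200)  # abrupt scene change at frame index 5
frames.append(static + 200)
key = select_key_frames(frames, k=1.0)
print(key)  # the abrupt change at index 5 is flagged as a key frame
```

In a real pipeline, frames would come from a video decoder (e.g. OpenCV's `cv2.VideoCapture`), and the difference measure and `k` would be tuned per application.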