A core region captioning framework for automatic video understanding in story video contents
Blog Article
Due to the rapid increase in image and video data, research on the visual analysis of such unstructured data has recently become active. One representative image captioning model, DenseCap, extracts various regions in an image and generates region-level captions. However, because the existing DenseCap model does not assign priorities to region captions, it is difficult to identify the relatively significant region captions that best describe the image. There has also been a lack of research on captioning focused on core areas in story content, such as images in movies and dramas.
In this study, we propose a new image captioning framework based on DenseCap that aims to promote the understanding of movies in particular. In addition, we design and implement a character identification module so that character information can be used for caption detection and caption improvement in core areas. We also propose a core area caption detection algorithm that considers the variables affecting region caption importance. Finally, a performance evaluation is conducted to determine the accuracy of the character identification module, and the effectiveness of the proposed algorithm is demonstrated by visually comparing its results with those of the existing DenseCap model.
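The idea of ranking region captions by importance can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the `RegionCaption` fields, the weighted-sum scoring, and the specific weights are all assumptions chosen only to show how factors such as caption confidence, region size, and the presence of an identified character could combine into a priority for selecting core region captions.

```python
from dataclasses import dataclass

@dataclass
class RegionCaption:
    caption: str         # text generated for the region (e.g., by DenseCap)
    confidence: float    # detection/caption confidence in [0, 1]
    area: float          # region area as a fraction of the frame, in [0, 1]
    has_character: bool  # True if an identified character appears in the region

def importance(rc: RegionCaption,
               w_conf: float = 0.5, w_area: float = 0.3, w_char: float = 0.2) -> float:
    """Weighted importance score; the weights here are illustrative, not from the study."""
    return w_conf * rc.confidence + w_area * rc.area + w_char * float(rc.has_character)

def core_captions(regions: list[RegionCaption], top_k: int = 3) -> list[RegionCaption]:
    """Return the top-k region captions ranked by descending importance."""
    return sorted(regions, key=importance, reverse=True)[:top_k]

if __name__ == "__main__":
    regions = [
        RegionCaption("a man in a suit", 0.90, 0.25, True),
        RegionCaption("a lamp on a table", 0.80, 0.05, False),
        RegionCaption("a woman smiling", 0.85, 0.30, True),
    ]
    for rc in core_captions(regions, top_k=2):
        print(f"{importance(rc):.3f}  {rc.caption}")
```

Regions containing identified characters receive a score boost, reflecting the framework's use of character information to emphasize story-relevant areas over background regions.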