12th International Multi-Media Modelling Conference, Beijing, China, 4-6 January 2006, pp. 153-160
In this paper, we propose a novel strategy at an abstract level that combines textual and visual clustering results to retrieve images using semantic keywords and to auto-annotate images based on their similarity with existing keywords. Our main hypothesis is that images that fall into the same text cluster can be described by the common visual features of those images. To implement this hypothesis, we estimate the common visual features of the textually clustered images. When given an un-annotated image, we find the best-matching image across the different textual clusters by processing their low-level features. Experiments demonstrate the good accuracy of the proposal and its high potential for use in image annotation and for improving content-based image retrieval.
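To make the abstract's strategy concrete, the following is a minimal sketch of the general idea, not the paper's actual implementation: annotated images are grouped into text clusters, a common (here, mean) visual feature vector is estimated per cluster, and an un-annotated image is assigned the keywords of the cluster whose centroid is nearest to its low-level features. All names (cluster_visual_centroids, annotate_image, the sample data) are hypothetical, and the choice of the mean and Euclidean distance are illustrative assumptions.

```python
import numpy as np

def cluster_visual_centroids(visual_features, text_cluster_ids):
    """Estimate a common visual feature vector (the mean) for each text cluster.

    visual_features : (n_images, n_dims) array of low-level features
    text_cluster_ids : length-n_images sequence of cluster labels from text clustering
    """
    ids = np.asarray(text_cluster_ids)
    return {cid: visual_features[ids == cid].mean(axis=0) for cid in np.unique(ids)}

def annotate_image(query_feature, centroids, cluster_keywords):
    """Assign the un-annotated image to the nearest text cluster and return its keywords."""
    best_cid = min(centroids,
                   key=lambda cid: np.linalg.norm(query_feature - centroids[cid]))
    return cluster_keywords[best_cid]

# Illustrative usage with made-up data: 4 annotated images in 2 text clusters.
features = np.array([[0.1, 0.9], [0.2, 0.8],   # cluster 0, e.g. "beach" images
                     [0.9, 0.1], [0.8, 0.2]])  # cluster 1, e.g. "forest" images
labels = [0, 0, 1, 1]
keywords = {0: ["beach", "sea"], 1: ["forest", "trees"]}

centroids = cluster_visual_centroids(features, labels)
print(annotate_image(np.array([0.15, 0.85]), centroids, keywords))  # -> ['beach', 'sea']
```

A real system would of course use richer low-level descriptors (colour, texture, shape) and a proper distance or similarity measure between an image and a cluster, as discussed later in the paper.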