Understanding and classifying image tweets

Abstract

Social media platforms now allow users to share images alongside their textual posts. These image tweets make up a fast-growing share of all tweets but, unlike their text-only counterparts, have not been studied in depth. We study a large corpus of image tweets to uncover what people post about and how a tweet's image relates to its text. We show that an important functional distinction is between visually-relevant and visually-irrelevant tweets, and that an automated classifier using text, image and social context features can distinguish these two classes, obtaining a macro F1 of 70.5%.
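
As a loose illustration of the general recipe the abstract describes (fusing text features with image and social-context features in a binary visually-relevant vs. visually-irrelevant classifier, evaluated by macro F1), here is a minimal scikit-learn sketch. The toy tweets, the extra feature values, and the logistic-regression model are assumptions for illustration only, not the authors' actual feature set or pipeline; see the paper for those details.

```python
# Illustrative sketch only: generic fusion of text features with
# hypothetical image/social-context features for a binary
# visually-relevant (1) vs. visually-irrelevant (0) classifier.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Toy data: (tweet text, [hypothetical image/social features], label).
tweets = [
    ("sunset at the beach tonight", [0.9, 12.0, 1.0], 1),
    ("so bored waiting for the bus", [0.1, 3.0, 0.0], 0),
    ("check out my new puppy!",      [0.8, 45.0, 1.0], 1),
    ("ugh mondays amirite",          [0.2, 7.0, 0.0], 0),
] * 25  # repeat so this toy demo has enough samples to split

texts  = [t for t, _, _ in tweets]
extras = np.array([f for _, f, _ in tweets])
labels = np.array([y for _, _, y in tweets])

# Text features: TF-IDF over the tweet text.
X_text = TfidfVectorizer().fit_transform(texts)

# Fuse text features with the hypothetical image/social features.
X = hstack([X_text, csr_matrix(extras)]).tocsr()

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```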

Publication
Proceedings of the 21st ACM International Conference on Multimedia
Dongyuan Lu
Postdoctoral Alumnus

WING alumnus; former postdoc

Min-Yen Kan
Associate Professor

WING lead; interests include Digital Libraries, Information Retrieval and Natural Language Processing.