ELCo: Bridging Emoji Mashup and Lexical Composition

(Figure: the 🌉 "Bridge at Night" emoji, as rendered on Google Android 12L)

With the rise of emojis in online messaging, considerable effort has gone into building Emoji Mashup Systems (EMSs). An EMS takes two separate emojis as input and generates a new one that combines the two. Existing EMSs leverage only visual information and forgo the rich semantic information in text.

We study the novel problem of representing a concept by composing a sequence of emojis. The problem is challenging because the composition must uncover implicit, non-literal meaning in the concept. We first overcome data scarcity by adapting the Unicode ZWJ sequences and creating our own ELCo dataset (1,153 annotations for 210 adjective-noun compounds). We then benchmark the task in a generation setting and find it challenging even for a state-of-the-art system; we therefore re-formalize it in a simpler ranking setting to evaluate the intrinsic properties of the task. Our findings: (1) a pretrained language model (PLM) is good at distinguishing ground truth from irrelevant candidates, but weak at distinguishing ground truth from plausible ones; (2) we identify subcategories that are easy or difficult for the PLM, and provide a post-hoc explanation from annotation statistics; (3) initial attempts at informing the model of the implicit semantics show that performance varies with how that information is supplied.
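To make the ranking setting concrete, here is a minimal toy sketch: each candidate emoji sequence is represented by the Unicode short names of its emojis, and candidates are ranked against the concept by a similarity score. A simple bag-of-words cosine stands in for the PLM scorer; the concept "heavy rain" and both candidate sequences are hypothetical illustrations, not items from the ELCo dataset.

```python
from collections import Counter
from math import sqrt


def bow(text):
    """Bag-of-words vector over lowercase whitespace tokens."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_candidates(concept, candidates):
    """Rank emoji sequences (given as lists of short names) against a concept.

    A real system would score with a PLM; bag-of-words cosine is a toy stand-in.
    """
    return sorted(
        candidates,
        key=lambda seq: cosine(bow(concept), bow(" ".join(seq))),
        reverse=True,
    )


# Hypothetical example: one plausible composition vs. an irrelevant distractor.
concept = "heavy rain"
candidates = [
    ["cloud with rain", "heavy large weight"],  # plausible ground truth
    ["soccer ball", "birthday cake"],           # irrelevant distractor
]
ranked = rank_candidates(concept, candidates)
```

Even this crude scorer separates ground truth from irrelevant candidates; the harder case highlighted in finding (1), distinguishing ground truth from *plausible* alternatives, is exactly where surface overlap stops helping and deeper semantics are needed.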

Project Name:
ELCo🌉: Bridging Emoji Mashup 🐻‍❄️ and Lexical Composition 🔤