COIN: COmmonsense INference in Natural Language Processing

Workshop to be held in conjunction with EMNLP-IJCNLP in Hong Kong



Research in natural language understanding and textual inference has advanced considerably in recent years, resulting in powerful models that are able to read and understand texts, in some cases even outperforming humans. However, it remains challenging to answer questions that go beyond the text itself and require additional commonsense knowledge. Previous work has explored both explicit representations of background knowledge (e.g., ConceptNet or NELL) and latent representations that capture some aspects of commonsense (e.g., OpenAI GPT). These and any other methods for representing and using commonsense knowledge in NLP are of interest to this workshop.

The COIN workshop aims to bring together researchers interested in modeling commonsense knowledge, developing computational models thereof, and applying commonsense inference methods to NLP tasks. We are interested in all types of commonsense knowledge representation and explicitly encourage work that makes use of knowledge bases as well as approaches that mine or learn commonsense from other sources. The workshop is also open to evaluation proposals that explore new ways of evaluating commonsense inference methods, going beyond established natural language processing tasks.

Shared task

As part of the workshop, we plan to organize a shared task on Machine Comprehension using Commonsense Knowledge. In contrast to other machine comprehension tasks (and workshops), our focus is on inferences over commonsense knowledge about events and participants as required for text understanding. We plan to offer two tracks: one on everyday scenarios and one on news events. The first track is based on a revised and extended version of SemEval-2018 Task 11 (Ostermann et al., 2018). The second track is based on the recent ReCoRD dataset (Zhang et al., 2018). Both tracks focus on questions about events or participants of the story that are not explicitly mentioned in the text, thus requiring task participants to develop approaches that use the narrative context and commonsense knowledge to draw suitable inferences. The data for the shared task is currently being created and will be available shortly.

We invite both long (8 pages) and short (4 pages) papers. The page limits refer to content; any number of additional pages is allowed for references. Papers should follow the EMNLP-IJCNLP 2019 formatting instructions (TBA).

Each submission must be anonymized, must be written in English, and must contain a title and abstract. We especially welcome the following types of papers:

Please submit your papers at http://www.softconf.com/TODO/

All deadlines refer to 11:59pm Pacific Daylight Time (UTC/GMT -7 hours).

Program Committee