COIN: COmmonsense INference in Natural Language Processing



Research in natural language understanding and textual inference has advanced considerably in recent years, resulting in powerful models that can read and understand texts, in some cases even outperforming humans. However, it remains challenging to answer questions that go beyond the texts themselves and require additional commonsense knowledge. Previous work has explored both explicit representations of background knowledge (e.g., ConceptNet or NELL) and latent representations that capture some aspects of commonsense (e.g., language models, cf. Radford et al., OpenAI preprint 2018). These and any other methods for representing and using commonsense in NLP are of interest to this workshop.

The proposed workshop aims to bring together researchers who are interested in modeling commonsense knowledge, developing computational models thereof, and applying commonsense inference methods to NLP tasks. We are interested in any type of commonsense knowledge representation, and we explicitly encourage work that makes use of knowledge bases as well as approaches developed to mine or learn commonsense from other sources. The workshop is also open to evaluation proposals that explore new ways of evaluating methods of commonsense inference, going beyond established natural language processing tasks.

Shared task

As part of the workshop, we plan to organize a shared task on Machine Comprehension using Commonsense Knowledge. In contrast to other machine comprehension tasks (and workshops), our focus will be on inference over commonsense knowledge about events and participants, as required for text understanding. We plan to offer up to two tracks: one on everyday scenarios and one on news events. The first track will be based on a revised and extended version of SemEval 2018 Task 11 (Ostermann et al., 2018), which SO and MR co-organized. The second track will be based on news texts. Both tracks will focus on questions about events or participants of the story that are not explicitly mentioned in the text, thus explicitly requiring task participants to develop approaches that use the narrative context and commonsense knowledge to draw suitable inferences. The data for the shared task is currently being created and will be completed by the end of 2018.

We invite both long (8 pages) and short (4 pages) papers. These limits refer to the content; any number of additional pages for references is allowed. Papers should follow the ACL/EMNLP 2019 formatting instructions (TBC).

Each submission must be anonymized, written in English, and contain a title and abstract. We especially welcome the following types of papers:

Please submit your papers at http://www.softconf.com/TODO/

Program Committee