Uphill Battles in Language Processing
Scaling Early Achievements to Robust Methods
Call for Poster Abstracts
The workshop on Uphill Battles in Natural Language Processing will feature a poster session, for which we invite submissions in the form of research summaries (up to 2 pages). Accepted research summaries will be included in the workshop proceedings, with up to two extra pages of content permitted in the final version. The submission deadline is 18 August 2016, and authors of accepted submissions will be notified on 8 September 2016.
We plan to partially fund travel and registration expenses for some students and postdoctoral researchers attending the workshop. We expect to involve these selected students and postdoctoral researchers in some of the organizational activities on the day of the workshop itself.
Topics of interest and guidelines for writing your research summary:
Our workshop seeks to revive a discussion on uphill battles in natural language processing -- in particular, within the following four topic areas:
- Dialogue and Speech
- Natural Language Generation
- Document Understanding
- Grounded Language
We invite 2-page research summaries which present work on these topics -- in particular, research which focuses on deeper problems which still baffle NLP systems, and/or research which seeks to introduce new tasks, definitions and techniques. While work making incremental progress in the context of well-established tasks is generally of value, it is not the focus of this workshop. Nevertheless, we are happy to include work-in-progress, as well as analyses of negative findings. We are particularly seeking work which engages with the workshop topics and goals, and will stimulate discussions among workshop participants.
At the end of this call, you can find a list of early goals which colleagues suggested to us. You may use these goals in conjunction with the four topic areas as an indicator of the lines of work we are interested in.
When writing the summary, please make sure you address the following points.
- How does your work engage with one or more of the workshop topics? (Identify which topics)
- Which challenges are you seeking to address?
- What is your approach and what have you done so far?
- How is the work being evaluated, or how do you plan to evaluate it?
- Are you a student or postdoctoral researcher in need of funding to attend the workshop? If you can secure funding from other sources for attending the workshop, please let us know, so that we can identify students in need of travel support.
Each summary will be reviewed by at least 2 program committee members and evaluated along the following dimensions.
- Engagement with one or more topics of the workshop
- Originality
- Expected impact
- Evaluation and/or evaluation plan
The summaries will be published in the proceedings of the workshop (unless authors indicate otherwise).
Submission guidelines:
Please submit your summary here
Summaries may describe collaborative work. However, students and postdoctoral researchers who are applying for travel funds should be the first author of their papers.
Summaries should be at most 2 pages in length, excluding references. Each submission should follow the EMNLP 2016 formatting instructions. Submission templates can be downloaded from http://www.emnlp2016.net/submissions.html
Reviewing will be blind; papers should not include authors' names or affiliations. Furthermore, self-references that reveal the author's identity, e.g., "We previously showed (Smith, 1991) ...", should be avoided. Instead, use citations such as "Smith (1991) previously showed ...". Submissions that do not conform to these requirements will be rejected without review. Separate author identification information is required as part of the online submission process.
Workshop participation:
The authors of accepted research summaries will be invited to present posters at the workshop. Other sessions of the workshop will feature talks and discussions led by established researchers as well as younger scientists, and we are hopeful that students' current and future work will benefit greatly from a dialogue between the different groups.
Early Goals that Remain Uphill Battles
We gratefully acknowledge the researchers who provided input at the early stages of this workshop's planning, which helped us compile the following list of uphill battles.
- Real-time modelling of the flow of information in text, sentence-by-sentence, capturing both meaning (what's being said) and function (why it's being said): Specifically, what entities are the focus of attention, what's being said about them and why, and how does focus of attention shift -- abruptly (in response to some kind of signal) or gradually?
- Extending sentence-level inference to text-level inference, reducing the burden of inference by letting the text constrain which inferences are relevant, while at the same time incorporating those inferences into the meaning and/or function of the text.
- Developing a usable semantics of linguistic content for practical applications such as information retrieval, machine translation, summarization and/or question answering -- that is, a semantics that is simultaneously robust across the different expressions that can realise it and compatible with logical operators such as negation.
- Acquiring sufficient high-quality annotation to make tools such as parsers more robust, and portable to new domains. The challenge here is that the most successful natural language tools are those that are supervised. But Zipf's law means that we need exponentially growing amounts of labeled data to train them.
- Identifying information that is not asserted or entailed by a text, but which any competent speaker would nonetheless infer. For example, a competent speaker would infer from Hobbs's (1990) example "A jogger was hit by a car in Fresno last night" that this happened while the victim was jogging. (Notably, a competent speaker would not make the analogous inference if the subject were "a farmer".)
- Re-embracing knowledge, plans and plan recognition in work on dialogue and dialogue systems, rather than continuing to limit ourselves to simple statistical state-based approaches and/or dialogue systems that exhibit only very controlled interaction types in closed domains. NLP researchers have shied away from this problem due to the knowledge engineering bottleneck. But the ability to recognize plans, goals, and motivations would not only support deeper language understanding and inference, it would also tie directly into affective states (e.g., recognizing plan failure would imply a negative state, while plan success, a positive state).
- Creating new resources, moving beyond the Wall Street Journal corpus in discourse, beyond form-filling systems in dialogue, and beyond Geo-Query in question-answering. Given the significant effort required to create new resources, available ones have driven the framing of interesting research problems (rather than the other way around), and solutions risk overfitting to those resources.
- Letting go of intrinsic evaluation as a driver of research choices, since it leads to carving things up into small isolated problems that are amenable to such evaluation. While progress has been made through this commitment to intrinsic evaluation, we should be addressing goals where progress requires assessment by more varied types of evaluation measures.
- Re-focussing on language in domains which themselves have a rich semantics. Such domains were de rigueur in early work in NLP, but we lacked the methods, tools, and capacity needed to deal with them. The challenge of such domains is to represent their semantics in a way that priors on utterances and utterance meaning can be computed naturally and correctly, and used to understand both what was said and what may have been left unsaid.
- While we have some coarse semantics for entities, objects and events in the form of semantic hierarchies (e.g., WordNet), distributional methods for semantic similarity, semantic class learning methods, etc., we need richer semantic representations that can still be populated automatically (or semi-automatically) through advances in information extraction. The lack of such representations may be one of the main reasons why semantic information has shown limited benefits for coreference resolution and other applications.
- Re-visiting the intentions underlying language use. While early approaches to NL understanding and generation both tried to incorporate intentions, so as to reflect the pragmatics of language use, this turned out to be too complex and unmanageable. But without considering speaker intentions, we can only scratch the surface of the problems of recognizing and understanding the opinions and sentiments underlying language use. On the other hand, we cannot deal with speaker intentions without language data relevant to modelling them, and the only richly annotated resource -- the Penn TreeBank -- provides few examples of language expressing sentiment, persuasion or argument. We need such resources as a basis for modelling the pragmatic inferences that people make, and are intended to make, when exposed to such language.
- Capturing how language varies with domain. Extensive effort here has only led to marginal improvements. While this may partly be because the problem is ill-posed, given that "domain" may lump too many things together, including genre, the problem remains one (or more) in need of solution.
- Dealing with morphologically rich languages, or more generally, languages that are typologically different from mainstream European languages. In both parsing and machine translation, most attempts to date have delivered only marginal improvement: New ideas and better methods are needed for dealing with these languages.
- Natural language generation from rich semantic representations, both with respect to individual propositions and with respect to relations between propositions, in order to support applications such as paraphrasing the results of semantic parsing for clarification or explanation of the results of data analytics.
- In text-to-speech synthesis (TTS), early research tried but failed to address two problems that persist today and that have become more important in the context of systems such as Siri and Cortana: (1) assigning human-like prosody to input text, and (2) generating natural-sounding emotional speech without simply recording hours of someone trying to sound angry or happy, etc. Both are real research challenges.
- In dialogue systems, early research tried to support clarification in dialogue. Today there is an even greater need for clarification in spoken dialogues, where the input is often more errorful than in text. This is related to the issue of NLG from rich semantic representations, where there is a need to paraphrase from such representations in order to confirm correctness of the output of semantic parsing.
- Using situational awareness (i.e., pragmatics) to understand language in a real-world (not simulated) context.
- Effective modeling of an agent's internal knowledge: What an agent knows and doesn't know, what an agent knows that s/he knows vs. what s/he doesn't know that s/he knows, etc.
- Understanding context and a user's desires and goals, in such a way that a dialogue system knows when to intervene and when there is more value in staying silent.