Panelist: Christopher Manning

A (maybe not yet) unified theory of inference for text understanding

In the 1980s, people in NLU had lofty goals but modest realities; today we have much better realities, but unfortunately are mainly failing to enunciate lofty goals. If we contrast the textual inference problems focused on 30 years ago with where we are today, on some of the problems we have made real progress, whereas others, such as elaboration (connecting predicates via reasons or causes), have tended to be neglected. We are also at an interesting scientific juncture: To what extent do we need explicit language and knowledge representations, versus finding that the distributed representations of a bidirectional LSTM with attention seem to be able to do everything better? Whichever way that plays out, I think there are still clear things we need to work more on: less primitive memories and knowledge, developing goals and plans, understanding inter-sentential relationships, and doing elaborations from a situation using common sense knowledge.
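To make the "distributed representations with attention" alternative concrete, here is a minimal sketch of the attention component such models use: each (e.g. BiLSTM) hidden state is scored against a query vector, the scores are normalized with a softmax, and the states are pooled into a single context vector. This is an illustrative toy in plain Python, not any particular system discussed by the panelist; the function names and the use of simple dot-product scoring are assumptions.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(query, hidden_states):
    # Dot-product attention: score each hidden state against the query,
    # normalize the scores, and return the attention weights together
    # with the weighted sum (context vector). Vectors are equal-length
    # lists of floats.
    scores = [sum(q * h for q, h in zip(query, hs)) for hs in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * hs[i] for w, hs in zip(weights, hidden_states))
               for i in range(dim)]
    return weights, context
```

For example, with hidden states `[[1.0, 0.0], [0.0, 1.0]]` and query `[1.0, 0.0]`, the first state receives the larger weight and the context vector is pulled toward it. The open question in the text is whether such learned, distributed pooling can substitute for explicit knowledge representations.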