Panelist: Donia Scott
Title: Multilingual NLG revisited?

Multilingual natural language generation (M-NLG) was proposed in the early 1990s as an alternative to Machine Translation (MT). Instead of translating from an input text in one language to an (ideally semantically equivalent and syntactically well-formed) target text in another language, the aim was to map directly from semantics into texts in multiple languages. The advantage of the M-NLG approach over MT was seen to be the improvement in fluency, especially at the levels of discourse and pragmatics, that could be achieved by avoiding biases introduced by the input language. While some notable work in M-NLG continues, the venture has been largely abandoned by the NLG community since the early part of the new century, in part because the limits of scalability and domain dependence made the method unattractive for real-world applications, and in part because of the increasing success of statistical MT methods. Within statistical MT, however, the dominant focus on the sentence over larger discourse units has done little to address issues of pragmatic congruence and discourse equivalence. Are there new opportunities to address these issues? For example, with the growing popularity of neural MT as a dominant paradigm, can we expect to see multilingual equivalents with improved fluency beyond the scope of sentences?