Scripts (Schank and Abelson, 1977) are fundamental pieces of common sense knowledge that describe how certain scenarios (like going shopping) are prototypically represented in our minds. They consist of different events in a certain order (write shopping list - go to supermarket - enter supermarket...), the people and objects involved in the scenario (customer, cashier, goods, money, ...) and conditional connections between all elements (you need pen and paper to write the list, you need to pay before you leave the supermarket, ...).
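The three ingredients named above (ordered events, participants, conditional connections) can be pictured as a simple data structure. The following sketch is purely illustrative, with hypothetical names; it is not the project's actual representation:

```python
from dataclasses import dataclass, field

@dataclass
class Script:
    """A minimal, hypothetical script representation: a scenario name,
    a prototypical event sequence, the participants involved, and
    conditions mapping an event to what it requires."""
    scenario: str
    events: list                     # events in prototypical order
    participants: list               # people and objects in the scenario
    conditions: dict = field(default_factory=dict)  # event -> prerequisites

shopping = Script(
    scenario="going shopping",
    events=["write shopping list", "go to supermarket",
            "enter supermarket", "choose products", "pay", "leave"],
    participants=["customer", "cashier", "goods", "money"],
    conditions={"write shopping list": ["pen", "paper"],
                "leave": ["pay"]},   # you must pay before you leave
)

print(shopping.events[0])
```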
Some scripts - above a certain level of complexity - can be learned from texts (Chambers and Jurafsky, 2008, 2009). However, many scenarios (like how to go shopping) are shared implicit knowledge, so texts rarely spell them out. A story that described every step of a shopping trip in detail, including entering and exiting every shop, would read as boring and strange. The reason is that humans normally rely on scripts to make their language more efficient.
As an example, imagine the following dialog:
A: "I'm going to the supermarket."
B: "Do you have money?"
There is no need to explain B's question any further: B can safely assume that A shares the same knowledge about shopping - both have the same script knowledge.
We believe that directly asking many people how they experience certain scenarios can help to acquire the missing pieces of script knowledge. This kind of crowdsourcing (cf. also Snow et al., 2008) for knowledge offers several benefits for the script mining problem:
- We get multiple descriptions of the same event. Those "event paraphrases" (like choose products, decide what you want) are a valuable resource in themselves, and they ground our script representations in natural language.
- We get different versions of a script, revealing variability in sequential constraints and participants. For example, we can learn that you can walk or drive to a supermarket, and that you can pay either with a credit card or in cash. Within the same scenario, we can learn that it doesn't matter whether you get your shopping cart before or after you enter the supermarket, but that it's essential to pay before you leave.
- We can get access to arbitrary scripts that are known to humans, and we can collect them either globally, from specific cultural or age groups, or under any other restriction we are interested in.
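The second point above - telling strict ordering constraints apart from flexible ones - can be sketched as a simple check over the collected sequences. This is a hedged illustration, not the project's actual method: a pair of events is treated as strictly ordered only if one precedes the other in every sequence containing both.

```python
from itertools import combinations

def ordering_constraints(sequences):
    """Given several event sequences for one scenario, return
    (strict, flexible): strict holds pairs (a, b) where a precedes b
    in every sequence containing both; flexible holds unordered pairs
    observed in both orders."""
    strict, flexible = set(), set()
    all_events = {e for seq in sequences for e in seq}
    for a, b in combinations(all_events, 2):
        orders = set()
        for seq in sequences:
            if a in seq and b in seq:
                orders.add(seq.index(a) < seq.index(b))
        if len(orders) == 1:
            strict.add((a, b) if orders.pop() else (b, a))
        elif len(orders) == 2:
            flexible.add(frozenset((a, b)))
    return strict, flexible

# Two (made-up) collected sequences for the shopping scenario:
seqs = [
    ["get cart", "enter supermarket", "pay", "leave"],
    ["enter supermarket", "get cart", "pay", "leave"],
]
strict, flexible = ordering_constraints(seqs)
```

On this toy input, paying strictly precedes leaving, while getting the cart and entering the supermarket come out as order-flexible - exactly the kind of distinction described above.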
Our final goal is to provide a large set of script representations, including sequential ordering constraints as well as information on the participants and conditions of events.
Together with Bernt Schiele and Marcus Rohrbach from the Max Planck Institute for Informatics, we're working on an interdisciplinary application of our data: we'll use our script knowledge to identify and classify events in video streams.
- Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL-HLT 2008.
- Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of ACL-IJCNLP 2009.
- Roger C. Schank and Robert P. Abelson. 1977. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum, Hillsdale, NJ.
- Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of EMNLP 2008.