Learning Narrative Structure

My research centers on discovering narrative structure. Why narrative, and why its structure? Narrative is a universal and ubiquitous element of the human experience. Stories are found in every society, embedded in every culture, and told by nearly every individual. We use stories for diverse purposes; they come in diverse forms, and they have diverse effects. It is clear, however, that to fully understand and explain human intelligence, beliefs, and behaviors (and to develop machines with those same abilities), we will have to understand why narrative is universal and explain the function it serves.

Structure is how we index, classify, frame, and ultimately understand narrative. It runs the gamut of scales and forms: from recurring or archetypal characters, such as Baba Yaga and the Trickster; to plot patterns such as Revenge, Tit-for-Tat, and Silver Lining; to ideological leanings such as Individualism vs. Collectivism or Liberal vs. Conservative. Identifying narrative structure is key to understanding the purpose and effects of narrative, and as such it has long captured the imagination of layman and scholar alike. Only now, however, are we beginning to develop tools and techniques powerful enough to study this phenomenon in a scientifically satisfying manner.

My work addresses at least four key questions:

  1. What are the semantic representations necessary to discover narrative structure?
  2. How can we efficiently gather enough high-quality data in those representations to enable structure identification and analysis?
  3. What new machine learning algorithms are needed to learn the many different types of narrative structure, and how do we then evaluate the structures so learned?
  4. How do we transpose what we have learned about extracting narrative structure to other domains, to enable important new applications?

Successfully answering these questions will have numerous practical and theoretical impacts. Narrative structure can be used to improve natural language processing and understanding, enable concept-based information retrieval over big data, and develop new learning or evaluation techniques in fields as varied as business, medicine, law, and political science. Scientific understanding is also within reach, such as elucidating the relationship between cognition and culture, understanding the science of persuasion and framing, and deepening our understanding of episodic memory and commonsense reasoning.

On the question of learning narrative structure, I have made significant progress: for the first time, I have demonstrated learning an actual theory of narrative structure from real narratives. My learning target was an early and influential theory of narrative structure, viz., Vladimir Propp's morphology of Russian folktales. In his seminal work, The Morphology of the Folktale, Propp identified a set of plot patterns and their subtypes, along with what was essentially a regular grammar for combining them into stories.

To learn Propp's theory I developed Analogical Story Merging (ASM), a novel machine learning technique that provides computational purchase on the problem of identifying a set of plot patterns from a given set of stories. ASM is based on the machine learning technique of Bayesian Model Merging and is powerful enough to learn a regular grammar over a set of previously unknown symbols. The technique is Bayesian in that it uses a prior and Bayes' rule to guide the search for the optimal model of the data. In ASM, the primary calculation of similarity is performed by an analogical mapping algorithm, an augmented version of the well-known Structure Mapping Engine; this algorithm assesses the similarity between two events, taking into account their structure, semantics, and role assignments. Using ASM, I was able to learn a substantive portion of Propp's morphology; furthermore, ASM can be adapted to learn several different types of narrative structure by varying the input data and the form of the prior.
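To give a flavor of the underlying idea, the sketch below is a toy illustration of Bayesian Model Merging, the technique ASM builds on: start with a maximally specific model (one chain of states per sequence), then greedily merge pairs of states whenever doing so improves the posterior, which balances a simplicity prior against the likelihood of the data. This is not ASM itself; ASM replaces exact symbol match with analogical similarity over event structure and uses a more sophisticated prior. All function names and the simple size-penalizing prior here are my own illustrative assumptions.

```python
import math
from collections import Counter, defaultdict
from itertools import combinations

def initial_model(sequences):
    """Maximally specific model: one linear chain of states per sequence,
    each state emitting exactly one symbol."""
    emit, trans, starts = {}, defaultdict(Counter), Counter()
    sid = 0
    for seq in sequences:
        prev = None
        for sym in seq:
            emit[sid] = Counter([sym])
            if prev is None:
                starts[sid] += 1
            else:
                trans[prev][sid] += 1
            prev = sid
            sid += 1
    return emit, trans, starts

def log_posterior(emit, trans, starts, alpha=1.0):
    """Log prior (penalizing model size) plus log likelihood of the counts
    under maximum-likelihood emission/transition probabilities."""
    lp = -alpha * len(emit)  # simple geometric prior on the number of states
    for es in emit.values():
        tot = sum(es.values())
        lp += sum(c * math.log(c / tot) for c in es.values())
    for ts in trans.values():
        tot = sum(ts.values())
        lp += sum(c * math.log(c / tot) for c in ts.values())
    tot = sum(starts.values())
    lp += sum(c * math.log(c / tot) for c in starts.values())
    return lp

def merge(emit, trans, starts, a, b):
    """Merge state b into state a, pooling emission and transition counts."""
    emit2 = {s: Counter(c) for s, c in emit.items() if s != b}
    emit2[a] = emit[a] + emit[b]
    trans2 = defaultdict(Counter)
    for s, ts in trans.items():
        for t, c in ts.items():
            trans2[a if s == b else s][a if t == b else t] += c
    starts2 = Counter()
    for s, c in starts.items():
        starts2[a if s == b else s] += c
    return emit2, trans2, starts2

def model_merge(sequences, alpha=1.0):
    """Greedy search: merge the best state pair while the posterior improves."""
    model = initial_model(sequences)
    score = log_posterior(*model, alpha)
    while True:
        best = None
        for a, b in combinations(sorted(model[0]), 2):
            cand = merge(*model, a, b)
            s = log_posterior(*cand, alpha)
            if best is None or s > best[0]:
                best = (s, cand)
        if best is None or best[0] <= score:
            return model, score  # no merge improves the posterior
        score, model = best
```

On two identical sequences such as `["a","b","c"]` given twice, the search collapses the six initial states into a single three-state chain, since merging same-symbol states shrinks the model without costing likelihood; in ASM the analogous step generalizes similar story events into shared plot-pattern states.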

Data & Annotation

Gathering detailed, high-quality, minimally noisy semantic annotations of natural language data is difficult, which is why I developed the Story Workbench, a new tool for semi-automatic annotation of these different layers of meaning. The tool is a comprehensive solution to a long-standing problem: how to collect high-quality, machine-assisted human annotations of text quickly and at low cost, while maintaining generality and extensibility.

Using the Story Workbench, I have assembled the largest, most deeply annotated narrative corpus to date: approximately 19,000 words of Russian folktales, double-annotated and adjudicated for nearly twenty different layers of annotation. With additional funding from DARPA, I am assembling an even larger corpus of over 160,000 words, all deeply semantically annotated.