NECPhon 2013

7th Northeast Computational Phonology Meeting

October 26, 2013 - MIT
32-D461 (Stata Center, Dreyfoos Tower, 4th floor)

The Northeast Computational Phonology Circle (NECPhon) is an informal yearly meeting of scholars interested in any aspect of computational phonology. It provides a relaxed atmosphere for researchers to present work in progress on a variety of topics, including learnability, modeling, and computational resources.

All are welcome to attend. If you plan on attending, please send an email to the workshop organizer, Adam Albright, so that we can plan for refreshments and keep you informed of updates. You can see more information about past NECPhon meetings here.

See below for parking and transportation information.


Schedule of talks

11:30–12:00  Lunch (bagels and coffee will be provided)
12:00–12:30  Ezer Rasin (MIT)
             An evaluation metric for Optimality Theory
             (joint work with Roni Katzir, Tel Aviv University)
12:30–1:00   Joe Pater and Robert Staubs (UMass Amherst)
             Modeling Learning Trajectories with Batch Gradient Descent
1:00–1:20    break
1:20–1:50    Jane Chandlee (University of Delaware)
             Strictly Local Phonological Processes
1:50–2:20    Anthony Brohan (MIT)
             A case study in assimilation: The view from PBase
2:20–2:40    break
2:40–3:10    Naomi Feldman (UMD), Caitlin Richter (UMD), Josh Falk (U Chicago), and Aren Jansen (JHU)
             Predicting listeners' perceptual biases using low-level speech features
3:10–3:40    Sean Martin (NYU)
             Phonetic category learning with unsupervised cue selection
3:40–4:00    break
4:00–4:30    Tamas Biró (Yale)
             Heuristic production, heuristic learning: Boltzmann distribution for strict domination
4:30–5:00    Adam Jardine (University of Delaware)
             Computationally, tone is different
5:00         Organizational meeting

Parking and transportation

The Hayward Lot is the best bet for visitors seeking free weekend parking. It is in Kendall Square in Cambridge, bounded by Main St., Hayward St., Ames St., and Carleton St.

Here are two other lots nearby that have free parking on weekends:

For more information about lots around MIT, see the MIT Parking page.

For those coming by T from the south (e.g., South Station): shuttle buses will replace Red Line service between Park St. and Kendall on Sat–Sun, Oct 26–27. Take the Red Line toward Alewife, get off at Park St., and follow the signs/directions for the shuttle bus to Kendall.


Selected abstracts

Adam Jardine (University of Delaware): Computationally, tone is different

This presentation shows that unbounded tone plateauing (UTP, Hyman 2011), a common tonal process, is more computationally complex than nearly all segmental processes, supporting the view that tone is substantially different from segmental phonology (Clements and Goldsmith 1984, Yip 2002, Hyman 2011). I prove that due to the unbounded, bidirectional nature of UTP, it cannot be described with a subsequential finite-state transducer (although it is still finite-state), which contrasts with typological research arguing that all segmental processes are subsequential (Chandlee and Heinz 2012, Gainor et al. 2012, Heinz and Lai 2013). Intuitively, the conclusion is that the application of segmental processes must be bounded in one direction, but tonal processes may be unbounded in both directions. Additionally, this new generalization helps identify potential counterexamples to the hypothesis that all segmental processes are subsequential; two are discussed.
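The UTP mapping itself can be pictured with a toy string rewriter (a schematic sketch only, not Jardine's finite-state construction): every tone between the leftmost and rightmost H surfaces as H.

```python
def utp(tones):
    """Unbounded tone plateauing (toy version): raise every tone
    between the leftmost and rightmost H to H, e.g. HLLLH -> HHHHH."""
    if 'H' not in tones:
        return tones
    i, j = tones.index('H'), tones.rindex('H')
    return tones[:i] + 'H' * (j - i + 1) + tones[j + 1:]
```

Whether a given L surfaces as H here depends on an H that may lie arbitrarily far away on either side, which is the bidirectional unboundedness that blocks a subsequential (deterministic, single-pass) implementation.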

Sean Martin (New York University): Phonetic category learning with unsupervised cue selection

Distributional learning is commonly proposed as the mechanism by which human learners acquire novel phonetic categories. However, there are several notable challenges which are only addressed to a limited extent in existing models of category learning. In particular, while the speech signal is extremely high-dimensional, models of distributional learning are generally tested on a hand-picked set of features corresponding to standard assumptions about which cue dimensions listeners attend to in speech perception. The current project explores methods of automatic feature selection which address this issue.

Building on cognitively plausible online learning models such as those proposed by Vallabha et al. (2007) or Toscano & McMurray (2010), I adapt a cue-weighting approach to model unsupervised category learning in which the learner must discover not only the categories themselves but also which cues to learn over. The learner is given a set of cues that vary in informativeness and learns both a set of categories and the cues' relative informativeness. Variations of the model which use these learned cue weights to achieve more accurate modeling of human performance, or more accurate category learning with noisy input data, are tested and discussed.
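The core of distributional learning along a single cue dimension can be sketched as a two-category mixture estimated by EM (an illustrative toy, not the Vallabha et al. or Toscano & McMurray models; the synthetic VOT values and starting guesses are assumptions for the sketch):

```python
import math
import random

# Synthetic voice onset time (VOT) tokens, in ms, from two hidden
# categories (roughly /b/-like and /p/-like).
random.seed(0)
data = [random.gauss(15, 5) for _ in range(200)] + \
       [random.gauss(60, 8) for _ in range(200)]

mu, sd, w = [10.0, 70.0], [10.0, 10.0], [0.5, 0.5]  # initial guesses

def norm_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

for _ in range(50):  # EM iterations
    # E-step: responsibility of each category for each token
    resp = []
    for x in data:
        p = [w[k] * norm_pdf(x, mu[k], sd[k]) for k in range(2)]
        z = sum(p)
        resp.append([pk / z for pk in p])
    # M-step: re-estimate means, standard deviations, mixing weights
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sd[k] = max(1.0, math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                       for r, x in zip(resp, data)) / nk))
        w[k] = nk / len(data)
```

The cue-selection problem the abstract raises is what happens when the learner faces many such dimensions at once, most of them uninformative, and must weight them without supervision.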

Ezer Rasin (MIT): An evaluation metric for Optimality Theory
(joint work with Roni Katzir, Tel Aviv University)

Our goal is to develop an evaluation metric for OT, a criterion for comparing grammars given the data. Using this criterion, the child can try to search through the space of possible grammars, eliminating suboptimal grammars as it proceeds. Our empirical focus is the lexicon and the constraints, and our evaluation metric is based on the principle of Minimum Description Length (MDL). We wish to model aspects of knowledge such as the English-speaking child's knowledge that the first segment in the word 'cat' involves aspiration, that [raiDer] is underlyingly /raiter/, and that [rai:Der] is underlyingly /raider/. We take it that any theory of phonology would require this knowledge to be learned rather than innate, making this a convenient place to start. The learner that we present succeeds in obtaining such knowledge, which, to our knowledge, makes it a first. The generality of the MDL-based evaluation metric allows us to learn additional parts of the grammar without changing our learner. We demonstrate this by learning not just the lexicon and the ranking of the constraints but also the content of the constraints (both markedness and faithfulness constraints) from general constraint schemata. The learner that we present succeeds in obtaining this knowledge, making it a first in this domain as well.
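The MDL idea behind the evaluation metric can be caricatured with raw character counts (a toy illustration only; the authors' actual encoding of OT lexicons and constraints is far more articulated, and the segment strings below are schematic):

```python
# Score a hypothesis by |grammar| + |data encoded given the grammar|;
# the shorter total description wins.
pairs = [("raiter", "raiDer"), ("beter", "beDer"), ("siter", "siDer")]

# Hypothesis A: memorize every surface form verbatim, no grammar.
cost_a = sum(len(ur) + len(sr) for ur, sr in pairs)

# Hypothesis B: store one underlying form per word plus a single
# flapping generalization; surface forms are then derivable for free.
rule = "t -> D / V_V"
cost_b = len(rule) + sum(len(ur) for ur, _ in pairs)
```

Once the generalization covers enough forms, its one-time cost is amortized and the rule-based hypothesis yields the shorter description, which is the sense in which MDL lets the learner eliminate suboptimal grammars.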