Address: 77 Massachusetts Ave., Cambridge, MA 02139
This is a collection of software I've either written or cobbled together from existing resources. Credit is due to the original authors listed, and any bugs are due to me - I can't guarantee results you may get using these scripts, but I hope they will be useful to others doing similar experiments.
This is a set of Matlab scripts that calibrates the eyetracker and runs a simple Psychtoolbox experiment from within Matlab. It requires two Matlab packages: Talk2Tobii (t2t) and Psychtoolbox. The calibrator is directly adapted from one created by Fani Deligianni; I've updated it to work with the current (early 2012) version of Talk2Tobii and modified it for developmental experiments (prompted starts for each point and an option to play an attention-getter instead). The experiment itself is just a very short preferential-looking paradigm.
Luca Filippin at the T2T Google Group was very helpful as I was getting started with the T2T package. Many thanks also to Celeste Kidd and Johnny Wen for sharing an example of an eyetracker/Psychtoolbox experiment - I've borrowed some functional/organizational scripts (mostly parameter handling) from them. Celeste is also responsible for the version of the steel drum jam that plays during the calibrator, and Katherine White is the creator of babylaugh.mov.
This is a simple video memory task for an online experiment using the excellent Willow Python package by Jaap Weel and Kevin McCabe. It uses HTML5 video encoding with some fallback for browsers that use older HTML standards (like Internet Explorer). Note that there are three versions of every video in this code, for different browsers. I make my videos in iMovie and export them as *.mov files; there are plenty of free format converters online (I use Miro) to create the other versions - but always check your output files!
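The three-versions-per-video idea can be sketched as below; this is an illustrative Python helper that emits the markup, and the filenames, dimensions, and fallback text are hypothetical rather than what my actual scripts produce:

```python
# Sketch: build an HTML5 <video> element offering three encodings of the
# same clip, so each browser can pick a format it supports. Everything
# here (names, sizes) is illustrative.
def video_tag(basename):
    """Return an HTML5 video element with MP4, WebM, and Ogg sources."""
    sources = [
        ('%s.mp4' % basename, 'video/mp4'),    # e.g. Safari, IE9+
        ('%s.webm' % basename, 'video/webm'),  # e.g. Chrome, Firefox
        ('%s.ogv' % basename, 'video/ogg'),    # e.g. older Firefox/Opera
    ]
    tags = ''.join('<source src="%s" type="%s">' % (src, mime)
                   for src, mime in sources)
    fallback = 'Your browser does not support HTML5 video.'
    return ('<video width="640" height="480" controls>%s%s</video>'
            % (tags, fallback))

print(video_tag('test_video'))
```

The browser walks the source list in order and plays the first format it can decode, which is why one page can serve all three files without any scripting.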
There are probably more elegant video solutions, but Mechanical Turk feedback suggests that this code works for at least 90% of users. The biggest problem is laggy videos that don't play through to the end - as a sanity check I include a fixation cross at the end of all my test videos and ask subjects to report whether they saw it. The Willow project page contains an absolutely fantastic tutorial, and Willow itself has much more power than I'm using - such as the ability to have two participants interact with each other.
One other note about Willow: I have found that scripts often run more smoothly from our Linux server than from local files on my PC - so if a movie isn't playing back or a picture isn't displaying, test it from your server before spending lots of time debugging locally.
(Note, please get in touch if you might be interested in using stimuli that appear in this example; some are mine and some are due to Paul Muentener & Laura Lakusta.)
This is a fairly simple R script that just takes a single CHILDES-formatted corpus file and turns it into an R dataframe with all corpus information (e.g. child, age, date) listed on each utterance. From there it can be written out as a CSV or other standard file format.
In addition to copying any tiers that it finds into columns of the dataframe, it also creates a gloss of the utterance that removes corpus notation, leaving an unannotated version that is suitable for displaying to experimental participants or conference audiences :) In the example script I've also done some additional processing that deletes ending punctuation and various whitespace anomalies in order to find even more sentences with matched gloss and %mor lines.
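For anyone who prefers Python, here is a rough sketch of the same parsing idea. The real script is in R; this simplified version skips the @ header lines, handles only the main speaker tier and dependent tiers, and does a much cruder gloss cleanup than the actual code:

```python
import re

# Walk CHAT-format transcript lines, pairing each speaker utterance
# (*CHI:, *MOT:) with any dependent tiers (%mor:, %gra:) that follow it.
def parse_chat(lines):
    rows, current = [], None
    for line in lines:
        if line.startswith('*'):                 # main utterance tier
            if current:
                rows.append(current)
            speaker, utt = line[1:].split(':', 1)
            current = {'speaker': speaker, 'utterance': utt.strip()}
        elif line.startswith('%') and current:   # dependent tier
            tier, content = line[1:].split(':', 1)
            current[tier] = content.strip()
        # '@' header lines (e.g. @Participants) are ignored in this sketch
    if current:
        rows.append(current)
    return rows

def gloss(utterance):
    """Strip common CHAT notation to leave a plain-text gloss (rough!)."""
    out = re.sub(r'\[[^\]]*\]', '', utterance)   # drop [...] annotation codes
    out = re.sub(r'[<>()&+@:]', '', out)         # drop inline markers
    return ' '.join(out.split())                 # normalize whitespace
```

Note that dropping parentheses actually restores shortened forms like "(be)cause" to "because", which is the behavior you want for display to participants.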
Be wary of the output you get from this script, especially if you wish to use tiers other than gloss and %mor. If you are not already, become familiar with text encoding, control characters, and the CSV format so you can understand what might be happening if a line looks funny. Especially watch out for commas or quotation marks anywhere in your corpus file.
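To make the comma-and-quote problem concrete, here is a small Python illustration (the utterance is made up): a naive string join produces a row whose field boundaries are ambiguous, while a proper CSV writer quotes and escapes the field so it survives a round trip:

```python
import csv
import io

# A field containing both a comma and quotation marks.
utterance = 'he said, "more cookie"'

# Naive join: the embedded comma now looks like a field boundary.
naive = ','.join(['CHI', utterance])

# csv.writer: quotes the field and doubles the embedded quote marks.
buf = io.StringIO()
csv.writer(buf).writerow(['CHI', utterance])
safe = buf.getvalue().strip()

print(naive)  # CHI,he said, "more cookie"
print(safe)   # CHI,"he said, ""more cookie"""
```

Spreadsheet programs and csv readers will split the naive row into three fields instead of two, which is exactly the kind of silently shifted column this warning is about.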
In the Brown (1973) corpus this finds matching lengths for approximately 97% of adult sentences and 95% of child sentences - most of the exceptions are repairs ("He has, she has a doll") and sentence fragments. The script is pretty slow (10 minutes or so to read the entire Eve corpus on my machine), but you should only need to run it once per corpus.
Here are two additional Python files that may make the above more useful; they were also written for the Brown (1973) corpus. The first corrects some of the text-encoding anomalies, and the second simply gives you a set of smaller CSV slices that are less likely to crash your spreadsheet program.
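The slicing step might look roughly like this in Python. The function name, output naming scheme, and chunk size below are hypothetical; the point is just that each slice repeats the header row so every file opens cleanly on its own:

```python
import csv

# Sketch: split a large CSV into files of at most `rows_per_slice` data
# rows, repeating the header in every slice. Naming scheme is made up.
def slice_csv(in_path, rows_per_slice=50000):
    base = in_path.rsplit('.', 1)[0]
    with open(in_path, newline='') as f:
        reader = csv.reader(f)
        header = next(reader)                # keep the header for every slice
        slice_num, count, writer, out = 0, 0, None, None
        for row in reader:
            if writer is None or count >= rows_per_slice:
                if out:
                    out.close()
                slice_num += 1
                out = open('%s_part%d.csv' % (base, slice_num),
                           'w', newline='')
                writer = csv.writer(out)
                writer.writerow(header)      # re-emit header in this slice
                count = 0
            writer.writerow(row)
            count += 1
        if out:
            out.close()
```

Using the csv module for both reading and writing (rather than splitting on newlines by hand) keeps quoted multi-line fields intact across slice boundaries.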