New publication — EARSHOT model of human speech recognition

This brief report has been YEARS in the making. Largest team on any publication from our lab? Congrats especially to Heejo, Sahil & Monica!

Magnuson, J.S., You, H., Luthra, S., Li, M., Nam, H., Escabí, M., Brown, K., Allopenna, P.D., Theodore, R.M., Monto, N., & Rueckl, J.G. (2020). EARSHOT: A minimal neural network model of incremental human speech recognition. Cognitive Science, 44, e12823. http://dx.doi.org/10.1111/cogs.12823 [PDF] [Supplementary Materials]

3 lab presentations / proceedings publications at CogSci 2019

We had three presentations/proceedings papers at CogSci 2019.

  1. Magnuson, J.S., Li, M., Luthra, S., You, H., & Steiner, R. (2019). Does predictive processing imply predictive coding in models of spoken word recognition? In A.K. Goel, C.M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 735-740). Montreal, QC: Cognitive Science Society. [PDF]
  2. Magnuson, J.S., You, H., Rueckl, J.G., Allopenna, P.D., Li, M., Luthra, S., Steiner, R., Nam, H., Escabí, M., Brown, K., Theodore, R.M., & Monto, N. (2019). EARSHOT: A minimal network model of human speech recognition that operates on real speech. In A.K. Goel, C.M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 2248-2253). Montreal, QC: Cognitive Science Society. [PDF]
  3. McClelland, J.L., McRae, K., Borovsky, A., Kuperberg, G., & Hill, F. (2019). Symposium in memory of Jeff Elman: Language learning, prediction, and temporal dynamics. In A.K. Goel, C.M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 33-34). Montreal, QC: Cognitive Science Society. [PDF]

New paper from Monica Li et al.

After years of dedicated work, Monica Li (with support from her co-authors) has published a terrific new paper in the Journal of Memory and Language:

Li, M.Y.C., Braze, D., Kukona, A., Johns, C.L., Tabor, W., Van Dyke, J.A., Mencl, W.E., Shankweiler, D.P., Pugh, K.R., & Magnuson, J.S. (2019). Individual differences in subphonemic sensitivity and phonological skills. Journal of Memory and Language, 107, 195-215. https://doi.org/10.1016/j.jml.2019.03.008 (links at publications page)

In addition to an epic set of experiments and individual differences measures (and implications for whether phonological processing is unusually precise or imprecise in individuals with lower reading ability), Monica provides a direct comparison between growth curve analysis (GCA) and generalized additive models (GAMs).

Congrats, Monica!

Do X in Y: Install lens in Ubuntu Linux

Doug Rohde’s lens (the light, efficient network simulator) is an awesome tool. However, given that it has not been actively maintained since 2000, its shelf life is probably limited. I still have some legacy projects that were developed in lens (mainly using SRNs), and I like to be able to re-run and tweak them. At some point, I’ll move them to TensorFlow, but in the meantime, if I can get them running on Linux, that would be great.

My primary Linux box is an Ubuntu 18.04.1 LTS virtual machine running under VirtualBox on a Mac. I got lens running there by consulting this page. My notes are a bit more compact than the details at that page, and they add crucial details now that legacy tcl/tk packages are hard to find.

  1. Get the tcl and tk packages: you will need tcl8.3, tcl8.3-dev, tk8.3, and tk8.3-dev as .deb files.
  2. Install those guys, following instructions like these, to wit:
    • sudo apt install ./name.deb
    • replace ‘name.deb’ with each package filename; I assume you should install tcl8.3 first, followed by tcl8.3-dev, tk8.3, and tk8.3-dev
  3. Choose where you will install lens. Personally, I like easy access to it right off my home directory in a folder called LENS.
  4. Download the code to that directory and unpack it:
    • sudo wget http://tedlab.mit.edu/~dr/Lens/Dist/lens.tar.gz
    • sudo tar zxf lens.tar.gz
    • sudo rm lens.tar.gz
  5. Replace every instance of CLK_TCK with CLOCKS_PER_SEC in the affected source files; a one-line way of doing this, from this page:
    • sed -i 's%CLK_TCK%CLOCKS_PER_SEC%g' ./Src/command.c ./TclTk/tcl8.3.4/unix/tclUnixPort.h
  6. In Src/system.h, comment out the “#include <bits/nan.h>” line; those functions have been integrated into math.h, which is also included. Skipping this step leads to compile errors.
  7. Edit the Makefile. Minimally, replace the line “CFLAGS = -Wall -O4 -march=i486” with “CFLAGS = -Wall -O4”. I also had some weird problems where the build generated a HOSTTYPE directory for i586 that would not work; those went away when I simply commented out every HOSTTYPE section except the default one. Inelegant, but it worked.
  8. Then build it: sudo make all 
    • If it didn’t work, I’m sorry. That’s all I’ve got…
  9. Then deviate slightly from the installation directions. In your ~/.bashrc, add these lines (save it, then start a new terminal or ‘source ~/.bashrc’):
    • export LENSDIR=${HOME}/LENS # or whatever your location is
    • export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${LENSDIR}/Bin
    • export PATH=${PATH}:${LENSDIR}/Bin # the Bin directory is all PATH needs; no need to repeat LD_LIBRARY_PATH here
  10. You should now be able to execute lens anywhere by typing ‘lens’. (The whole sequence is recapped in the sketch below.)
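
For convenience, here is the whole sequence as a single shell sketch. Treat it as an assumption-laden recap rather than a tested script: the .deb filenames below are placeholders for whichever tcl/tk 8.3 packages you manage to track down, and the hand edits in steps 6 and 7 still happen in an editor before the final make.

# sketch of steps 1-9; the .deb filenames are placeholders
sudo apt install ./tcl8.3.deb ./tcl8.3-dev.deb ./tk8.3.deb ./tk8.3-dev.deb
mkdir -p ${HOME}/LENS && cd ${HOME}/LENS
sudo wget http://tedlab.mit.edu/~dr/Lens/Dist/lens.tar.gz
sudo tar zxf lens.tar.gz && sudo rm lens.tar.gz
# swap CLK_TCK for CLOCKS_PER_SEC (step 5)
sed -i 's%CLK_TCK%CLOCKS_PER_SEC%g' ./Src/command.c ./TclTk/tcl8.3.4/unix/tclUnixPort.h
# hand-edit Src/system.h (step 6) and the Makefile (step 7), then:
sudo make all
# finally, add the export lines from step 9 to ~/.bashrc and source it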

Do X in Y: Use a function to save many plots to a list in R

You know that miserable feeling when you realize you are copying and pasting snippets of code and modifying them with hard-coded variables? I usually ignore it and press on, but last night I took the time to convert my code to a function. It was surprisingly easy (it took maybe 5 minutes) and had unexpected benefits. For example, the task I was doing required creating many (e.g., 8-30) ggplot objects, pushing them into a list, and then using multiplot to create PDFs. Every time I ran a chunk of code for one set of graphs, I found myself fidgeting while RStudio created each of the graphs in the plot window. My efforts to suppress that unwanted plotting were for naught, but when I converted to a function, all that extra plotting went away. Creating a 16-panel plot is probably 10x faster using the function! Here’s the code for the function (apologies to any programmers whose sensibilities I offend; I’m a hack, not a hacker).


#####################################################################################
# 2018.04.30, Jim Magnuson
library(foreach)
library(ggplot2)
library(scales)

plot.to.list <- function(dat, x.vars, x.names, y.vars, y.names, textsize=12,
                         jitteramount=0.1, ...) {
  someplots <- list()
  at <- 0
  foreach(xvar=x.vars, xname=x.names) %do% {
    foreach(yvar=y.vars, yname=y.names) %do% {
      # First, get the correlation between the current pair
      # NB: the 'get' command is crucial for evaluating the
      # strings as variable names, but this doesn't work in
      # the ggplot code below
      acor <- sprintf("%.3f", with(dat, cor(get(xvar), get(yvar))))
      at <- at + 1 # increment list position
      # now add a ggplot object to the list
      # NB: instead of 'get', we use 'aes_string' in place of 'aes'
      someplots[[at]] <- ggplot(dat, aes_string(x=xvar, y=yvar)) +
        geom_jitter(position=position_jitter(jitteramount)) +
        geom_smooth(method='lm', se=FALSE, linetype="dashed") +
        scale_x_continuous(breaks=pretty_breaks()) + # from scales; try it, you'll like it
        scale_y_continuous(breaks=pretty_breaks()) +
        theme(panel.background = element_rect(colour="black", fill="white"),
              axis.text.x  = element_text(size=textsize, face="plain", colour="black"),
              axis.text.y  = element_text(size=textsize, face="plain", colour="black"),
              axis.title.x = element_text(size=textsize, face="bold", colour="black", vjust=-.4),
              axis.title.y = element_text(size=textsize, face="bold", colour="black", vjust=1),
              plot.title   = element_text(size=11, hjust=1)) + # right-justify title
        xlab(xname) + ylab(yname) +
        ggtitle(paste("r =", acor)) # plot r as title
    }
  }
  return(someplots)
}
#####################################################################################

yvars  = c("RT", "RT_lenC") # variables w/in trace.sub I want to be on y axes
ynames = c("RT", "Adjusted RT") # better labels than the variable names
xvars  = c("NB", "DEL", "ADD", "SUB") # more trace.sub vars that I want on x axes
xnames = c("Neighbors", "Deletions", "Additions", "Substitutions") # better labels

# now call the function:
nb.plots = plot.to.list(dat=trace.sub, x.vars=xvars, y.vars=yvars,
                        x.names=xnames, y.names=ynames)

# now create a PDF with all the plots
pdf("trace_neighbor_types.pdf", height=7, width=13.7)
# get the multiplot function here:
#   http://www.cookbook-r.com/Graphs/Multiple_graphs_on_one_page_(ggplot2)/
multiplot(plotlist = nb.plots, cols=4)
dev.off()
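
As an aside, if you would rather not paste in multiplot, gridExtra offers a similar one-liner. This is a sketch under the assumption that you have the gridExtra package installed; it is not part of the original workflow:

library(gridExtra)
# marrangeGrob lays the stored ggplot objects out on a grid (spilling onto
# extra pages if they don't fit), and ggsave knows how to print the result
ggsave("trace_neighbor_types.pdf",
       marrangeGrob(grobs = nb.plots, nrow = 2, ncol = 4),
       height = 7, width = 13.7)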

Brand new publication: TISK 1.0

Okay, this one is actually new — it just appeared online today.

You, H. & Magnuson, J. S. (2018). TISK 1.0: An easy-to-use Python implementation of the time-invariant string kernel model of spoken word recognition. Behavior Research Methods. doi:10.3758/s13428-017-1012-5 [PDF]

This documents Heejo You’s beautiful re-implementation of Thomas Hannagan’s original TISK code. We were sad that Thomas could not join us in this paper (he has a new job in industry that precluded that), but we are immensely grateful to him for his help and advice.

New publication: Feedback helps

I am very pleased to announce (belatedly) that the lab has a new paper out in Frontiers:

Magnuson, J. S., Mirman, D., Luthra, S., Strauss, T., & Harris, H. (2018). Interaction in spoken word recognition models: Feedback helps. Frontiers in Psychology, 9:369. doi:10.3389/fpsyg.2018.00369 [HTML]

This paper was a very long time in the making: the project inspired the jTRACE re-implementation of TRACE, and earlier attempts to publish it were stymied. The upshot of the paper is that feedback in a model like TRACE affords graceful degradation in the face of noise.