Do X in Y: Convert values to subject-relative z-scores

I set out to do something that seemed like it shouldn’t be too hard in R. I had a dataframe with reaction times (RTs) for a bunch of subjects, and I wanted to convert the RTs to z-scores relative to each subject’s own mean. Doing this relative to the global mean is super easy:

data$global_zRT <- as.numeric(scale(data$RT)) # scale() returns a 1-column matrix; as.numeric() flattens it

However, getting RTs scaled by each subject’s own mean (which could be useful for visual inspection of data, or for some analyses) turns out not to be trivial, and I was unable to find relevant posts via Google search. Before posting to Stack Overflow myself, I tried our lab Slack channel. Dave Saltzman and Anne Marie Crinnion quickly produced a solution with dplyr. However, my dplyr calls were getting masked by plyr, and Anne Marie pointed out how to make the command bulletproof: prefix each call with dplyr::. Note that ‘subject’ here is a column in the dataframe, not a keyword of some sort.

data <- data %>% dplyr::group_by(subject) %>% dplyr::mutate(zRT = as.numeric(scale(RT))) %>% dplyr::ungroup()
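
To sanity-check the result, here is a minimal self-contained sketch with made-up data (the subject and RT columns match the example above; the numbers are arbitrary). Within each subject, zRT should come out with mean 0 and SD 1:

library(dplyr)

# toy data: two subjects with very different baseline RTs
data <- data.frame(subject = rep(c("s1", "s2"), each = 4),
                   RT = c(400, 450, 500, 550, 700, 800, 900, 1000))

# per-subject z-scores; as.numeric() drops the matrix attributes scale() adds
data <- data %>%
  dplyr::group_by(subject) %>%
  dplyr::mutate(zRT = as.numeric(scale(RT))) %>%
  dplyr::ungroup()

# check: each subject's zRT should have mean ~0 and SD 1
data %>% dplyr::group_by(subject) %>% dplyr::summarize(m = mean(zRT), s = sd(zRT))

If you want to avoid dplyr entirely, base R’s ave() does the same split-apply-combine in one line: data$zRT <- ave(data$RT, data$subject, FUN = function(x) as.numeric(scale(x)))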

— Jim Magnuson

New-ish publication

In this article led by Sahil Luthra, we introduce a new model of print-to-(over-time) speech.

Luthra, S., You, H., Rueckl, J. G., & Magnuson, J. S. (2020). Friends in low‐entropy places: Orthographic neighbor effects on visual word identification differ across letter positions. Cognitive Science, 44(12), e12917. https://doi.org/10.1111/cogs.12917

New publication — EARSHOT model of human speech recognition

This brief report has been YEARS in the making. Largest team on any publication from our lab? Congrats especially to Heejo, Sahil & Monica!

Magnuson, J.S., You, H., Luthra, S., Li, M., Nam, H., Escabí, M., Brown, K., Allopenna, P.D., Theodore, R.M., Monto, N., & Rueckl, J.G. (2020). EARSHOT: A minimal neural network model of incremental human speech recognition. Cognitive Science, 44, e12823. http://dx.doi.org/10.1111/cogs.12823 [PDF] [Supplementary Materials]

3 lab presentations / proceedings publications at CogSci2019

We had 3 presentations/papers at CogSci2019.

  1. Magnuson, J.S., Li, M., Luthra, S., You, H., & Steiner, R. (2019). Does predictive processing imply predictive coding in models of spoken word recognition? In A.K. Goel, C.M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 735-740). Montreal, QC: Cognitive Science Society. [PDF]
  2. Magnuson, J.S., You, H., Rueckl, J. G., Allopenna, P. D., Li, M., Luthra, S., Steiner, R., Nam, H., Escabí, M., Brown, K., Theodore, R., & Monto, N. (2019). EARSHOT: A minimal network model of human speech recognition that operates on real speech. In A.K. Goel, C.M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 2248-2253). Montreal, QC: Cognitive Science Society. [PDF]
  3. McClelland, J.L., McRae, K., Borovsky, A., Kuperberg, G., & Hill, F. (2019). Symposium in memory of Jeff Elman: Language learning, prediction, and temporal dynamics. In A.K. Goel, C.M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 33-34). Montreal, QC: Cognitive Science Society. [PDF]

New paper from Monica Li et al.

After years of dedicated work, Monica Li (with support from her co-authors) has published a terrific new paper in the Journal of Memory and Language:

Li, M.Y.C., Braze, D., Kukona, A., Johns, C.L., Tabor, W., Van Dyke, J. A., Mencl, W.E., Shankweiler, D.P., Pugh, K.R., & Magnuson, J.S. (2019). Individual differences in subphonemic sensitivity and phonological skills. Journal of Memory and Language, 107, 195-215. https://doi.org/10.1016/j.jml.2019.03.008 (links at publications page)

In addition to an epic set of experiments and individual differences measures (and implications for whether phonological processing is unusually precise or imprecise in individuals with lower reading ability), Monica provides a direct comparison between growth curve analysis (GCA) and generalized additive models (GAMs).

Congrats, Monica!

Do X in Y: Install lens in Ubuntu linux

Doug Rohde’s lens (the light, efficient network simulator) is an awesome tool. However, given that it has not been actively maintained since 2000, its shelf life is probably limited. I still have some legacy projects that were developed in lens (mainly using SRNs) and like to be able to re-run and tweak them. At some point I’ll move them to TensorFlow, but in the meantime, if I can get them running on Linux, that would be great.

My primary Linux box is an Ubuntu 18.04.1 LTS virtual machine running under VirtualBox on a Mac. I got lens running there by consulting this page. My notes are a bit more compact than the details at that page, and they add crucial details now that legacy tcl/tk packages are hard to find. (A consolidated script follows the numbered steps.)

  1. Get the tcl8.3 and tk8.3 packages (you need the .deb files for tcl8.3, tcl8.3-dev, tk8.3, and tk8.3-dev).
  2. Install those guys, following instructions like these, to wit:
    • sudo apt install ./name.deb
    • replace ‘name.deb’ with a package name; I assume you should install tcl8.3 first, followed by tcl8.3-dev, tk8.3, and tk8.3-dev
  3. Choose where you will install lens. Personally, I like easy access to it right off my home directory in a folder called LENS.
  4. Download the code to that directory and unpack it:
    • sudo wget http://tedlab.mit.edu/~dr/Lens/Dist/lens.tar.gz
    • sudo tar zxf lens.tar.gz
    • sudo rm lens.tar.gz
  5. Replace every instance of CLK_TCK with CLOCKS_PER_SEC in the files in Src; a one-line way of doing this from this page:
    • sed -i 's%CLK_TCK%CLOCKS_PER_SEC%g' ./Src/command.c ./TclTk/tcl8.3.4/unix/tclUnixPort.h
  6. In Src/system.h, comment out the “#include <bits/nan.h>” line; those functions have been integrated into math.h, which is also included. Skipping this step leads to compile errors.
  7. Edit the Makefile. Minimally, replace the line “CFLAGS = -Wall -O4 -march=i486” with “CFLAGS = -Wall -O4”. I also had a weird problem where the build generated a HOSTTYPE directory for i586 that would not work; it went away when I simply commented out every HOSTTYPE section except the default one. Inelegant, but it worked.
  8. Then build it: sudo make all 
    • If it didn’t work, I’m sorry. That’s all I’ve got…
  9. Then deviate slightly from the installation directions. In your ~/.bashrc, add these lines (save it, then start a new terminal or run ‘source ~/.bashrc’):
    • export LENSDIR=${HOME}/LENS # or whatever your location is
    • export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${LENSDIR}/Bin
    • export PATH=${PATH}:${LENSDIR}/Bin # library directories belong in LD_LIBRARY_PATH, not PATH
  10. You should now be able to run lens from anywhere by typing ‘lens’.
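
For convenience, here is the whole sequence gathered into a single shell script. Treat it as a sketch of the steps above rather than a tested installer: the tcl/tk .deb filenames are placeholders for whatever packages you managed to download, the sed edits assume the source lines read exactly as shown in the steps, any HOSTTYPE surgery on the Makefile (step 7) is still manual, and it assumes you chose ${HOME}/LENS as in step 3.

#!/bin/bash
# Sketch of the lens install steps above (Ubuntu 18.04); not a tested installer.
set -e

# Step 2: install legacy tcl/tk (placeholders -- use your actual .deb filenames)
sudo apt install ./tcl8.3.deb ./tcl8.3-dev.deb ./tk8.3.deb ./tk8.3-dev.deb

# Steps 3-4: download and unpack lens
mkdir -p ${HOME}/LENS && cd ${HOME}/LENS
sudo wget http://tedlab.mit.edu/~dr/Lens/Dist/lens.tar.gz
sudo tar zxf lens.tar.gz
sudo rm lens.tar.gz

# Step 5: CLK_TCK -> CLOCKS_PER_SEC (prefix with sudo if the files are root-owned)
sed -i 's%CLK_TCK%CLOCKS_PER_SEC%g' ./Src/command.c ./TclTk/tcl8.3.4/unix/tclUnixPort.h

# Step 6: comment out the bits/nan.h include (math.h already covers it);
# assumes the line reads exactly "#include <bits/nan.h>"
sed -i 's%^#include <bits/nan.h>%/* #include <bits/nan.h> */%' ./Src/system.h

# Step 7: drop -march=i486 from CFLAGS
sed -i 's%CFLAGS = -Wall -O4 -march=i486%CFLAGS = -Wall -O4%' Makefile

# Step 8: build
sudo make all

# Step 9: append the environment setup to ~/.bashrc (then open a new terminal)
cat >> ~/.bashrc <<'EOF'
export LENSDIR=${HOME}/LENS
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${LENSDIR}/Bin
export PATH=${PATH}:${LENSDIR}/Bin
EOF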