From MIT, Fedorenko Language Lab


Project description: Broadly, we were interested in how the brain represents combinations of concepts spanning the entire semantic space.  We used algorithms to go from "what parts of your brain light up" to "what concept are you looking at".

What I did: This was my main project at MIT. I had three main responsibilities.

  • Data collection. I wrote the presentation scripts, collected almost 170 hours of fMRI scans, and was the primary handler of this data. So, in addition to my own analyses, I also "packaged" these results in various ways for collaborators within MIT and at other universities, including Princeton and Ohio State.

  • Data analysis. I implemented and ran two new analyses. The first looked at the effect of syntactic category on brain activation (are there parts of the brain that care more about nouns than verbs?).  The second compared brain activation similarity with word vector similarity between concepts (do similar words lead to similar parts of the brain lighting up?).

  • Reporting and visualizing results. I edited the papers (cited below) and prepared the tables and figures that were used in these publications.
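The second analysis above is a representational-similarity comparison: build one concept-by-concept similarity structure from word vectors and another from brain activation patterns, then ask whether the two agree. Here is a minimal pure-Python sketch of that core computation, with toy vectors invented purely for illustration (the real analysis used actual fMRI data and learned word embeddings, not these numbers):

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def spearman(x, y):
    # Spearman rank correlation: Pearson correlation of the ranks
    # (no tie handling; fine for this toy example).
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Toy data: a word vector and a voxel activation pattern per concept (invented).
word_vecs = {
    "dog": [0.9, 0.1, 0.3], "cat": [0.8, 0.2, 0.4], "car": [0.1, 0.9, 0.2],
}
brain_pats = {
    "dog": [1.2, 0.3, 0.5, 0.1], "cat": [1.1, 0.4, 0.6, 0.2], "car": [0.2, 1.5, 0.1, 0.9],
}

# Similarity of every concept pair, once in word-vector space, once in brain space.
pairs = list(combinations(word_vecs, 2))
model_sim = [cosine(word_vecs[a], word_vecs[b]) for a, b in pairs]
neural_sim = [cosine(brain_pats[a], brain_pats[b]) for a, b in pairs]

# High score => similar words evoke similar activation patterns.
rsa_score = spearman(model_sim, neural_sim)
```

Rank correlation is a common choice for this kind of comparison because it only assumes the two similarity structures agree monotonically, not that the raw similarity values are on the same scale.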

Publications (so far): 


Project description: This project builds on the one above. Our basic question was: if you record someone's brain while they're reading two different sentences, can you tell how similar the sentences are based on how similar their brain activation patterns are?  Excitingly, the answer is (very tentatively) yes.

What I did: I led this project from its conception through to an early draft of the associated paper.

  • Experiment design. I helped design the task and made final decisions on presentation parameters. I also put together the set of pictures used as stimuli, creating 500 stimuli in total.

  • Data collection. I ran 30 subjects on the task (10 for a pilot and then 20 to confirm our findings). 

  • Analysis design. I wrote the script to run two types of analysis on the data: univariate and multivariate (RSA-style).

  • Reporting results. Finally, I wrote up the Methods and Results sections for this experiment, including tables and figures, for current and future lab members to expand upon and publish.
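To illustrate the univariate/multivariate distinction with invented toy numbers (this is not the lab's actual analysis script): a univariate analysis compares the overall activation level in a region between conditions, while a multivariate, RSA-style analysis compares the spatial pattern of activation across voxels:

```python
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    # Pearson correlation between two equal-length voxel patterns.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy voxel patterns evoked by two sentences (invented numbers).
sent_a = [0.8, 0.2, 0.5, 0.1]
sent_b = [0.7, 0.3, 0.6, 0.2]

# Univariate: difference in overall activation level in the region.
univariate_diff = mean(sent_a) - mean(sent_b)

# Multivariate (RSA-style): similarity of the spatial pattern across voxels.
pattern_similarity = pearson(sent_a, sent_b)
```

The univariate number throws away the spatial layout; the pattern correlation is what lets you ask whether two sentences that mean similar things also "look" similar in the brain.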


Project Description: We asked: do the parts of the brain that care about language also care about watching others perform actions?  We took a novel approach that specifically accounts for the fact that one person's language area is in a different part of the brain than another person's (an approach called functional localization; details here). Using this approach, we find that no, language regions don't care about action observation, going against the current literature.
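As a rough sketch of the functional-localization idea (all data and names below are invented for illustration, not the paper's pipeline): each subject's language region is defined individually from a language-localizer contrast, and the response to action observation is then measured inside that subject-specific region rather than at one fixed anatomical location:

```python
def top_k_voxels(contrast_map, k):
    # Indices of the k voxels responding most strongly to the localizer contrast.
    return sorted(range(len(contrast_map)), key=contrast_map.__getitem__, reverse=True)[:k]

# Two subjects whose "language voxels" sit in different places (toy maps).
localizer = {
    "sub01": [0.1, 2.3, 1.9, 0.2, 0.0],
    "sub02": [2.1, 0.1, 0.3, 1.8, 0.2],
}
action_task = {
    "sub01": [0.9, 0.1, 0.0, 1.1, 0.8],
    "sub02": [0.2, 1.0, 0.9, 0.1, 1.2],
}

# Average action-observation response within each subject's own language ROI.
responses = {}
for sub in localizer:
    roi = top_k_voxels(localizer[sub], k=2)  # the ROI differs per subject
    responses[sub] = sum(action_task[sub][v] for v in roi) / len(roi)
```

In this toy example the two subjects' language ROIs land on different voxels, so averaging at a fixed location would mix language voxels with non-language voxels; the subject-specific ROI avoids that.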

What I did: I was first author on this paper, so I was involved in everything post-data collection. This included:

  • Re-analyzing the data: We looked at 4 experiments conducted over the course of 8 years with 90 participants, so I figured out how to analyze this data in a way that allowed us to compare across experiments.

  • Reporting our findings: I conducted a literature review of related research and outlined the paper, framing our research to emphasize its relevance and novelty. I then wrote the bulk of the paper, with help from my PI, and worked with my co-authors to ensure the data were represented accurately.

  • Visualizing results. I also created all of the figures and tables in the paper, such as the one on the right.


Pritchett, B., Hoeflin, C., Koldewyn, K., Dechter, E., & Fedorenko, E. High-level language processing regions are not engaged in action observation or imitation. Manuscript submitted.