Robert J. and Nancy D. Carney Institute for Brain Science

Community Spotlight: Elena Luchkina

Elena Luchkina is a research scientist at Harvard University working with Dr. Elizabeth Spelke. She is also a research associate at UC Berkeley where she collaborates with Dr. Fei Xu. She received her Ph.D. in Cognitive Science from Brown University under the supervision of Drs. James Morgan and Dave Sobel in 2019 and completed postdoctoral training with Dr. Sandra Waxman at Northwestern University.


Carney Institute (CI): Tell us a bit about yourself.   

Elena Luchkina (EL): I grew up in Russia close to the Ural Mountains and moved to Moscow when I was nine. I did my undergraduate studies there in crisis management and was gearing up for a career in asset management. At the same time, I'd always been interested in human cognition, especially at its intersection with language. But when you grow up in a country that keeps having financial, social and political crises, science is just not the direction you often think of.  

After I graduated, I went to MIT for a master’s degree in finance. MIT has a several-week period in the winter when you can take a class in any field. I took my first neuroanatomy course and was immediately captivated. From there, I began contacting labs to see if I could get some experience working on human cognition. I was extremely fortunate, because Edward Gibson was looking for a native Russian speaker for one of his experiments looking at syntax in spoken language and in gestures.

That year, I started applying for grad school in neuroscience but, with little experience, no labs wanted to hire me or accept me as a grad student. Again, I was serendipitously fortunate because Athena Vouloumanos at NYU was the only professor who interviewed me and agreed to let me work in her lab. This experience, along with additional coursework, made a huge difference when I reapplied to grad school, and I was accepted to Brown.  

My experience at NYU had a profound influence on my research focus in grad school. Until then, I was mostly interested in working with adults, but I realized just how many questions are deeply rooted in early development. For example, how do we begin to develop language? How do we discover that language is communicative? How do we begin to link words to representations of things that are not around us or do not have physical form (e.g., beliefs)? To explore these questions, I decided to take a developmental approach.

At Brown, I was accepted as a student with Jim Morgan, but because he was on sabbatical in my first year, I was temporarily adopted by David Sobel. Prof. Morgan’s work leaned more toward the language side, whereas Prof. Sobel was focused on causal learning and selective social learning, for example, how children infer that someone is a better source of knowledge than other people. Combining the insights and mentorship from both labs, I ended up conducting research on how infants and young children observe other people's use of language and decide whether those people are reliable sources of new lexical knowledge.

CI: Is your research language specific or does it have application across all language groups?  

EL: I hope my findings are generalizable across languages, because the questions I ask are about foundational aspects of language and cognition. That said, given our access to participants, everything I have done has been with English, specifically with English-acquiring monolingual infants and English-speaking young children. It's entirely possible that there are differences across languages or cultures, especially when it comes to making inferences about whether someone is a reliable source of information. The language that you use and whether you acquire more than one language can affect such inferences. Some languages tend to have more words that are polysemous (have many meanings); some have fewer of these types of words. Learning more than one language and having multiple words for the same item can also make a difference.

CI: What does the spectrum of language acquisition look like for a young child?   

EL: Infants begin learning words fairly soon after they're born. By six months they have a little bit of knowledge of some highly familiar words like the names of body parts and they tend to know their own name. It's not clear to me if, at that point, they understand that words are used for referential communication.  

We do know that by about 12-14 months they begin to understand a few dozen words and even say some of their first words. And this is the time when we see evidence of infants understanding that words are communicative and referential. At this age, infants also begin to recognize that words can communicate about something unobservable, such as intentions. And they begin to understand that words can refer to something that is not present around them (e.g., a toy that has been hidden, a parent who is not in the room). 

The period from 15 to 19 months is especially interesting to me, because this is when language development truly takes off and infants begin to understand that words are extremely powerful for communication. Words at that point can describe things the infant has never seen, and infants can use words they already know to infer the meanings of new words.

CI: Are sociocultural factors influencing this development?  

EL: Given how we design studies, and partly because of our access to specific samples, I typically do not find effects of social and demographic factors. Our samples tend to be fairly homogenous. They tend to be from the same socio-economic status; most of them represent the middle and upper middle classes. Most infants I have tested have college-educated parents, and many have parents with degrees beyond college. And most of our experimental population ends up being white, middle-class, full-term babies. In part, these are biases introduced by who is available for a study during the work week.

Infants in my studies are also typically monolingual English-acquiring. Because we know the most about monolingual developmental trajectories, it's harder to judge what's going on when an infant is acquiring multiple languages, multiple syntactic structures, et cetera. I do emphasize diversity where I can and where it is most relevant to the questions being asked. For example, with my collaborators from UC Davis, Meghan Miller and Tonya Piergis, I’m looking at parent-infant interactions among infants who are likely to be diagnosed with autism and then tracking the language development of those infants. The most interesting phase of the analysis will come later, when these infants do or do not receive an autism diagnosis; they have a higher probability than a baseline cohort because they have a sibling with a diagnosis.

CI: Tell us about what your experimental design looks like.   

EL: Most of my studies rely on looking behavior because I work with infants, a population that we cannot simply ask what they know and think, and that can't fill out surveys or reliably respond to complicated verbal prompts. Instead, to evaluate whether an infant knows the meaning of a particular word, we use an intermodal preferential looking procedure: intermodal, because we use video and audio, and preferential looking, because we show them more than one image on a screen and measure where they look. Depending on the audio prompt that we play, infants may look one way or another.
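
To make the logic of that measurement concrete, here is a minimal, hypothetical sketch in Python of how a single preferential-looking trial might be scored, assuming gaze samples coded as looks to the target image, the distractor image, or away from the screen. The field names, sampling details, and analysis window are illustrative assumptions, not the actual pipeline used in these studies.

    # Hypothetical sketch: scoring one intermodal preferential looking trial.
    # Two images are on screen, an audio prompt names the target, and we
    # compute the share of on-screen looking directed at the target side.
    from dataclasses import dataclass

    @dataclass
    class GazeSample:
        time_ms: int   # time since the audio prompt, in milliseconds
        side: str      # "target", "distractor", or "away"

    def proportion_target_looking(samples, window=(300, 2000)):
        """Proportion of on-screen looks directed at the target image
        within an assumed post-prompt analysis window (in ms)."""
        start, end = window
        on_screen = [s for s in samples
                     if start <= s.time_ms < end and s.side != "away"]
        if not on_screen:
            return None  # the infant looked away for the whole window
        hits = sum(1 for s in on_screen if s.side == "target")
        return hits / len(on_screen)

    # A real eye tracker would yield hundreds of samples per trial;
    # proportions reliably above 0.5 across trials and infants suggest
    # that the spoken word is being linked to the named image.
    trial = [GazeSample(400, "target"), GazeSample(417, "target"),
             GazeSample(433, "distractor"), GazeSample(450, "away")]
    print(proportion_target_looking(trial))  # 0.666...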

CI: What’s the translational nature of your work?   

EL: One thing we're trying to see is whether the early capacity to learn a word for an object that's not visible predicts our ability to learn new information from language later on. Most of the time when we use language, as you and I are doing right now, we're not talking about what's around us. We're not walking around labeling your actions or my actions or objects that are in our environment. Rather, we're talking about things that we have no perceptual access to. That's learning from language and that's how most formal and informal instruction is structured. So, we are trying to see if infants who have an early difficulty linking words with imagined (but never-seen) objects will have difficulties with learning new facts from language by age two. If we observe such an effect, then perhaps we can design a diagnostic tool that would help us predict learning difficulties that are grounded in language. We may also be able to adapt our experimental design into diagnostic tools for atypical development like autism.  

CI: Tell us how you’re integrating modeling into your research. 

EL: Most of my lines of work aim either to establish new effects or to replicate recently established ones. It’s a bit early to begin modeling the cognitive processes and capacities I investigate. However, in addition to conducting my empirical work, I'm also running a group called the Social Contingency Consortium, where we work on integrating the science of interactions and their effects on learning. We have a fairly large contingent of modelers in that group. Inspired by this work, I hope to translate my research into something that can be modeled when it's ready.

Once we know more about how infants learn to connect their conceptual knowledge with language, there could be interesting implications for machine learning and artificial intelligence. To my knowledge, large language models don’t yet acquire any sort of conceptual-level knowledge. I hope that changes. Perhaps, when we know more about the way in which our ability to map language onto conceptual knowledge gets off the ground, we will be able to train AI on data sets drawn from a combination of observations of the world, observations of others' interactions with each other, and observations of language. Then perhaps machines will be able to acquire conceptual knowledge mapped to language.

But this is still a difficult question. We still do not quite understand how humans discover that language serves referential communication and how they begin to map their conceptual knowledge to it. People have been asking questions like these for ages. You can go back to Greek philosophers. So, I'm trying to tackle these hard questions and see if we can make some progress towards a better understanding.