Poster Presented at the Princeton Neuroscience Institute Summer Research Poster Session

Baby Lab Poster.png

Oral Presentation for The Leadership Alliance National Symposium (LANS)

Slide1.jpeg
Slide2.jpeg
Slide3.jpeg
Slide4.jpeg
Slide5.jpeg
Slide6.jpeg
Slide7.jpeg
Slide8.jpeg
Slide9.jpeg

Oral Presentation Transcript

Today I will discuss how the lateral occipital cortex (or LOC) is activated differently in infants when they view a static image compared to when they watch a video of an object being explored. This will be measured using functional near-infrared spectroscopy.

 

Developmental changes during the first year of life lay down the building blocks for complex visual perception. The development of an infant’s ability to perceive static images is relatively well understood, and its neural correlates are documented. However, there is a lack of research on what happens in the brain when infants visually explore objects and how this perception of dynamic objects develops over time.

 

The visual region that we are primarily interested in is the lateral occipital cortex, which has been shown in previous studies to be selective for shape. We hypothesize greater LOC activation to first-person dynamic stimuli compared to static stimuli since object motion and rotation cause perceived shape to change. The stimuli that we will use are 10 ten-second videos yoked to 10 static images of toys.

 

The main aspect of our study that differentiates it from previous studies of infant visual perception is the use of real-world, first-person stimuli that closely resemble what babies actually perceive on a daily basis. Many studies in psychology and neuroscience rely on static images on a screen to draw conclusions, but how applicable are these conclusions to the real world? Video data was collected by the Computational Cognition and Learning Laboratory at Indiana University for a paper called Toddler-Inspired Visual Object Learning. This lab utilized head-mounted cameras and eye-gaze trackers. The egocentric, or first-person, view that the head-mounted cameras capture is crucial for making babies feel as though they are the ones exploring the object.

 

As you can see from this video screenshot, the visual world that a baby perceives is markedly different from the visual perspective of an adult. For example, a child’s visual experience is much more focused on one or a small set of objects and is more likely to include hands than a face. This is another reason that using videos from an infant’s perspective, rather than just a video of someone exploring an object, is vital to the ecological validity of our study.

 

The lateral occipital cortex is a mid-level visual region. Aspects of mid-level vision include color, form, and movement. Previous research has suggested that the LOC is functionally similar to that of adults by 5–8 years of age. Two years ago, our lab found that it is selective for shape in six-month-old infants. When selecting which videos would be used in our study, we excluded any faces to prevent activation of the adjacent fusiform gyrus. Studies have also found elevated activation in the LOC itself for faces. Interestingly, the LOC is even more activated when a subject looks at inverted faces. In a study that showed subjects either faces or naturalistic scenes, the LOC was the only visual area that had heightened activation to both. Visual scientists see it essentially as the “wild card” of the visual system due to its unpredictability.

 

You may be wondering: if we already know that the LOC is selective for shape in six-month-old infants, why conduct another LOC study in the same age group? The top figure shows the stimuli used in the study that our lab conducted in 2017. Each static shape is completely different from the others, and therefore the identity of the object changes. Notice that the changes in shape are discrete. However, in our study, the changes in object shape are continuous because the videos allow a subject to see the shape change in a smooth, uninterrupted motion. Importantly, the same object is changing shape, and a within-objects test will provide insight as to whether the LOC is truly tracking object shape rather than simply object identity. Being able to recognize objects from different angles and points of view is a crucial skill for infants to develop in order to effectively navigate their world.

 

Shape is a critical aspect of vision for a variety of reasons. We all use shape as a cue for object identification. Evolutionarily, being able to identify objects is key for survival. For example, detecting a long, narrow, winding shape in the grass alerts us to the presence of a snake. Shapes are also a way for infants to begin categorizing objects, and how infants categorize objects can later affect word learning. The shape bias is the tendency for infants and children to generalize information about an object by its shape, rather than by its color, material, or texture, when learning nouns. The shape bias can be learned, and is displayed when a child consistently organizes objects by their shapes instead of by other features. Infants and children who have learned the shape bias tend to learn nouns at a faster rate than those who have not.

 

The ability to perceive shape becomes extremely important when children learn how to read and need to distinguish between different letters. Children who have extreme difficulty perceiving shape are often diagnosed with learning disorders, like dyslexia. It is common for dyslexic people to confuse the lowercase letters "b" and "d." If you think about letters as objects, "b" and "d" are the same object mirrored: flip a "b" about its vertical axis and you get a "d." This underscores the importance of studying how the visual system interprets object rotation and reflection, and how it is able to discriminate between different views of the same object.

 

One study found a correlation between understanding of shapes and use of spatial language, which includes prepositions related to location (words like “up” and “down”) and deictic terms like "here" or "there." Understanding shape is the first step in developing spatial awareness, a necessary skill for excelling in engineering and mathematics. This understanding, combined with the development of spatial language, contributes to comprehending spatial transformations and analogies. I’ve included some neuropsychological tests to clarify what I mean by this. In the Spatial Transformation task, one must predict which shape on the right would be formed by combining the two shapes on the left. The Spatial Analogies test asks a subject to match one of the bottom pictures with the top picture, and since we all have developed spatial intelligence, we know that the third picture is correct.

 

Why fNIRS? Let’s first start with what fNIRS is. fNIRS stands for functional near-infrared spectroscopy and records the same physiological signal as fMRI. fMRI uses radio-wave pulses in a static magnetic field to record what is called the blood-oxygen-level-dependent, or BOLD, response, which varies with fluctuations in local concentrations of deoxygenated hemoglobin. fNIRS also measures this BOLD signal, but by employing two wavelengths of near-infrared light instead of magnetism. Participants wear a cap embedded with emitters and detectors of near-infrared light, similar to how a pulse oximeter works. A NIRS channel is formed by an emitter-detector pair within which cortical hemodynamic responses can be recorded.

 

Near-infrared light is applied to the scalp via optical fibers that emit photons. Detectors absorb the fraction of photons that return to the surface of the scalp after passing through several layers of tissue, including the skull, cerebrospinal fluid, and vascular tissue. As you can see in the figure, the photons travel on a hammock-shaped trajectory, dipping into the cortex and emerging at the scalp to be collected by a detector. The distance between the emitter and the detector determines the trajectory of the photons; when they are farther apart, the photons dive deeper into the cortex. Greater distance also leads to greater attenuation of the signal. Therefore, standard channels are typically 3 cm apart.
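As a supplement to the talk, the attenuation measured at the two wavelengths is typically converted into hemoglobin concentration changes with the modified Beer-Lambert law. The sketch below illustrates that inversion in Python; the extinction coefficients, pathlength factor, and optical-density values are illustrative placeholders, not calibrated lab numbers.

```python
import numpy as np

# Modified Beer-Lambert law: the change in optical density at each
# wavelength is a linear mix of the changes in oxy- (HbO) and
# deoxy- (HbR) hemoglobin:
#   dOD(lambda) = (eps_HbO(lambda)*dHbO + eps_HbR(lambda)*dHbR) * L * DPF
# Extinction coefficients [eps_HbO, eps_HbR] at ~760 nm and ~850 nm
# (placeholder values for illustration only).
eps = np.array([[1.4866, 3.8437],   # 760 nm
                [2.5264, 1.7986]])  # 850 nm
L = 3.0    # emitter-detector separation in cm (the standard channel above)
dpf = 6.0  # differential pathlength factor (assumed)

def hb_concentrations(d_od_760, d_od_850):
    """Invert the 2x2 system to recover [dHbO, dHbR] from optical density."""
    d_od = np.array([d_od_760, d_od_850])
    return np.linalg.solve(eps * L * dpf, d_od)

d_hbo, d_hbr = hb_concentrations(0.01, 0.02)
```

Two wavelengths are needed precisely because there are two unknowns (HbO and HbR); with one wavelength the system would be underdetermined.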

 

Although near-infrared light might sound scary, it is perfectly safe at the wavelengths and intensities used. The amount of light that enters the brain is about the same as when you walk outside on a sunny day. Noises produced by an MRI machine can be both distracting and damaging to the ears; fNIRS, on the other hand, is completely silent. MRI requires rigid head stabilization and can only tolerate movement of a few millimeters. Obviously, babies move around a lot, so fNIRS’s ability to tolerate motion is crucial. Babies are tested in a non-restrictive environment, safe in their parents’ arms, and are not put into a claustrophobic environment like an MRI machine. fNIRS also has spatial localization superior to EEG; we wouldn’t be able to make any inferences about the LOC specifically using EEG.

 

We conduct the most rigorous MR coregistration in the infant fNIRS field to date. This allows us to localize the NIRS channels recorded on each baby to their cortical areas using age- and head-size-appropriate atlases. We then localize the LOC based on MNI coordinates from several adult studies and select the channels that fall over it.
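That channel-selection step can be sketched as a simple distance test in MNI space: keep only the channels whose coregistered positions fall near the LOC coordinate taken from the adult literature. Everything in the snippet below is invented for illustration (the LOC coordinate, channel positions, and radius), not our lab's actual pipeline.

```python
import numpy as np

# Hypothetical adult LOC coordinate in MNI space (placeholder, mm).
loc_mni = np.array([-45.0, -74.0, -2.0])

# Hypothetical coregistered channel positions for one infant (mm).
channel_mni = np.array([
    [-44.0, -70.0,  0.0],   # near the LOC
    [-20.0, -95.0,  5.0],   # early visual cortex
    [-48.0, -76.0, -4.0],   # near the LOC
    [ 30.0, -60.0, 50.0],   # parietal
])

# Keep channels within an assumed radius of the LOC coordinate.
radius_mm = 15.0
dists = np.linalg.norm(channel_mni - loc_mni, axis=1)
loc_channels = np.where(dists <= radius_mm)[0]
```

A real pipeline would also warp each infant's head geometry to an age-matched atlas before any distances are computed; the point here is only the final selection logic.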

 

We are currently selecting videos from a large corpus of in-lab play sessions. We have spent weeks coding and recoding dozens of videos in a software application called ELAN to determine the best possible clips of objects in true exploration, minimizing distraction and maximizing exploration. After I coded the videos, I consulted with another coder and we resolved any discrepancies in our coding. As you can see in the picture, variables were coded in a binary manner as either present or absent. Starred variables are ones we want to maximize; unstarred variables are to be minimized. I’m happy to answer questions about any of these variables. One variable that I would like to focus on is object rotation, which we operationally defined as the object moving at least 90 degrees from the last time it was stationary, in a continuous motion. This definition can also include when parts of an object are rotated, as seen with the Rubik’s cube. Additionally, body parts were minimized in order to prevent activation of the fusiform body area, or FBA, which is part of the fusiform gyrus.
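With two coders labeling each binary variable, agreement can be quantified before discrepancies are resolved, for example with Cohen's kappa, which corrects raw agreement for chance. The labels below are made up to show the computation; they are not our actual coding data.

```python
# Hypothetical present/absent (1/0) labels from two coders for one
# variable (e.g., object rotation) across ten clips.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

n = len(coder_a)

# Raw proportion of clips on which the coders agree.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement from each coder's marginal rate of coding "present".
p_a = sum(coder_a) / n
p_b = sum(coder_b) / n
expected = p_a * p_b + (1 - p_a) * (1 - p_b)

# Cohen's kappa: agreement above chance, scaled to the maximum possible.
kappa = (observed - expected) / (1 - expected)
print(f"agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```

Disagreements flagged this way (clips 4 and 8 in the fabricated example) are exactly the ones the two coders would then discuss and resolve.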

 

To determine whether our results are significant, we will conduct a t-test to see if there is significantly more oxygenated hemoglobin in the LOC region of interest while infants watch the videos than while they view the static images. In future studies, we would be interested to see how the LOC is activated when infants actually explore objects in real life instead of watching them on a screen. Since previous studies have only been conducted in adults, 5- to 8-year-olds, and six-month-old infants, we would also like to study LOC development in age groups between 6 months and 5 years.
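Because each infant contributes both a video condition and a static condition, the planned contrast is naturally a paired t-test. The sketch below shows the shape of that analysis; the per-infant HbO means are fabricated for illustration, not real data.

```python
import numpy as np
from scipy import stats

# Fabricated mean HbO responses in LOC channels, one value per infant,
# for the static-image blocks and the yoked video blocks.
hbo_static = np.array([0.08, 0.12, 0.10, 0.09, 0.11, 0.10])
hbo_video  = np.array([0.13, 0.15, 0.12, 0.14, 0.16, 0.13])

# Paired t-test: does the video condition elicit more HbO than static?
t_stat, p_value = stats.ttest_rel(hbo_video, hbo_static)
print(f"t({len(hbo_static) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```

A positive t statistic with a small p-value would be consistent with the hypothesis of greater LOC activation to dynamic stimuli; a real analysis would use the full sample and the preprocessed channel data rather than these toy numbers.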

 

I would like to thank the people and organizations displayed for making this research possible. Thank you for listening and I hope you all have a great conference!

Abstract for Leadership Alliance National Symposium

The first year of life is marked by rapid, significant changes in visual perception. The development of an infant’s ability to perceive static images is relatively well understood, and its neural correlates are documented. However, there is a lack of research on what happens in the brain when infants visually explore objects and how this perception of dynamic objects develops over time. We will record neural activity in the lateral occipital cortex (LOC), a mid-level visual region, while infants (5–6 months) watch videos of other infants manually exploring an object from a first-person perspective and while they view a static image of that object. The LOC is selective for object shape in both adults (Kourtzi & Kanwisher, 2001) and six-month-old infants (Emberson et al., 2017). Shape is developmentally critical for object identification (Wilcox, 1999) and early word learning (Smith et al., 2002). We yoked static images of toys with videos of infants (18–24 months) exploring these toys. Videos were collected by attaching cameras to eye-tracking glasses that infants wore during an in-lab play session (Bambach et al., 2018). We will record LOC activity using functional near-infrared spectroscopy (fNIRS), a safe, non-invasive neuroimaging modality for studying infant populations. We will utilize a highly accurate MR coregistration method to neuroanatomically locate fNIRS sensors and use only those consistent with a meta-analysis of the LOC in adults (Emberson et al., 2017). We hypothesize greater LOC activation to first-person dynamic stimuli compared to static stimuli since motion and rotation cause perceived object shape to change.