Researchers at the Centre for Neuroscience and the Department of Computer Science and Automation, Indian Institute of Science (IISc), have developed a novel computational model of eye movement that can predict a person’s ability to detect changes in their visual environment. The study was published in PLOS Computational Biology (PLOS stands for ‘Public Library of Science’). The insights the model provides into change blindness could help scientists better understand visual attention and its limitations.

Change Blindness

Change blindness is a phenomenon of visual perception in which an observer fails to detect a change made to a visual stimulus, even to an object with which they are actively engaged. Because of this, when missed changes are pointed out to the observer later, they are usually met with disbelief that something so obvious could ever have been missed. Recently, change blindness has also been shown to occur in more naturalistic scenarios.

Change blindness results from a disconnect between the assumption that our visual perceptions are detailed enough to be virtually complete and the visual system’s actual ability to represent and compare scenes moment to moment. It is a testable phenomenon that can be used to investigate the nature of visual representations under different conditions.

Research and its Findings

The research group at IISc studied the phenomenon of overlooking a visual change, i.e., change blindness. The human brain has a remarkable ability to attend to detail, yet it sometimes fails to notice even marked differences. The team first checked for change blindness among 39 people by showing them an alternately flashing pair of images with a minor difference between them. The researchers expected complex differences in eye movement patterns between subjects who performed the task well and those who did not. Instead, they found that some very simple gaze metrics could predict the success of change detection. Successful change detection was linked to two metrics: how long the subject’s gaze was fixated at a point, and the variability in saccadic amplitude, the distance travelled by the eye between two fixation points. (When making a saccade towards an intended target, the amplitude is generally such that the eyes land close to the object.) Subjects who fixated for longer at a particular spot, and whose eye movements were less variable, detected changes more effectively.
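As an illustration only, the two gaze metrics described above could be computed from recorded fixation data roughly as in the following Python sketch. The function name, the (x, y, duration) fixation format, and the example values are hypothetical and are not taken from the study.

```python
import numpy as np

def gaze_metrics(fixations):
    """Compute the two gaze metrics linked to change detection.

    `fixations` is a hypothetical list of (x, y, duration_ms) tuples,
    one per fixation, in the order they occurred.
    """
    durations = np.array([f[2] for f in fixations], dtype=float)
    points = np.array([(f[0], f[1]) for f in fixations], dtype=float)

    # Saccadic amplitude: distance the gaze travels between
    # consecutive fixation points.
    amplitudes = np.linalg.norm(np.diff(points, axis=0), axis=1)

    return {
        "mean_fixation_duration_ms": durations.mean(),
        "saccade_amplitude_variability": amplitudes.std(),
    }

# Per the study's findings, longer fixations and less variable saccades
# would predict more successful change detection.
example = [(100, 120, 260), (340, 130, 310), (342, 400, 295), (150, 380, 280)]
print(gaze_metrics(example))
```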

Based on these observations, the researchers developed a computational model that can predict how well a person might be able to detect changes in a sequence of similar images shown to them. Various biological parameters, constraints, and human biases were considered in the model. Since biological neurons are noisy, they do not encode the image precisely; there is considerable variability in the way neurons encode, process, and respond to images in the brain. This variability can be captured by a mathematical representation called a Poisson process. (A Poisson process is a model for a series of discrete events in which the average time between events is known, but the exact timing of each event is random.)
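The sketch below shows, under stated assumptions, how a Poisson process can capture such noisy encoding: each neuron’s expected response depends on the stimulus, but the actual response varies from presentation to presentation. The rate mapping, time window, and function name here are illustrative assumptions, not the authors’ actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spike_counts(pixel_intensities, max_rate_hz=50.0, window_s=0.2):
    """Simulate noisy neural responses to an image patch.

    Each neuron's expected spike count is proportional to the pixel
    intensity it encodes (a hypothetical rate mapping); the actual
    count is drawn from a Poisson distribution, so two presentations
    of the same image produce different responses.
    """
    rates = max_rate_hz * np.clip(pixel_intensities, 0.0, 1.0)  # spikes per second
    expected_counts = rates * window_s                          # mean count per window
    return rng.poisson(expected_counts)

# The same stimulus, encoded twice, yields different spike counts:
patch = np.array([0.1, 0.5, 0.9])
print(poisson_spike_counts(patch))
print(poisson_spike_counts(patch))
```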

The present model combines previously developed models that focus on eye movement and on change detection. The research team also tested their model against a state-of-the-art deep neural network known as DeepGaze II. The new model performed better at predicting human gaze patterns under free viewing conditions, when subjects were casually viewing the images. DeepGaze II could only predict where a person would look when presented with an image; it could not predict the eye movement pattern of a person searching for a difference between images. The new computational model, however, performed both functions.

Besides predicting where a subject would look, this computational model also takes into account the goals of the subject while viewing the images. The researchers now plan to incorporate artificial neural networks with memory into the model, which would more realistically mimic the way human brains retain recollections of past events in order to detect changes.

The researchers are of the view that the insights this model provides into change blindness could help scientists better understand visual attention and its limitations. These insights could be applied in diagnosing neurodevelopmental disorders such as autism, improving road safety while driving, and enhancing the reliability of eyewitness testimony.

