Sheehan & Serences 2022

Attractive serial dependence overcomes repulsive neuronal adaptation

(Commentary)

Tags: perception, neural decoding, custom model fitting, dimensionality analysis. Tools used: regression, mixed-effects linear models, regularization, cross-validation, PCA, deconvolution. Packages used: matplotlib, pandas, seaborn, scipy, numpy, scikit-learn, statsmodels.

Summary

With this project, I examined a tendency for human observers to perceive the world as more stable than it actually is: serial dependence. We first demonstrated that this bias is Bayes-optimal, in that it allows participants to reduce errors under high uncertainty. We then recorded brain activity (fMRI) while participants completed a task designed to elicit this bias, and decoded the item held in memory from neural activity patterns in visual cortex using a novel circular regression technique.

By examining the residual errors of this model, we tested whether the biases observed in visual cortex matched perception (attraction toward the previous item). We were surprised to find that visual cortex representations were instead repelled from the previous item, reflecting sensory adaptation. After many checks confirming that this bias was not an artifact of the fMRI time course or of our analysis procedure, we sought to make sense of the result with a simulated observer. We found that a spiking neural network model that prioritizes changes at encoding (adaptation) and stability at decoding (temporal integration) can vastly reduce energy usage while improving precision for stimuli generated by naturalistic processes, by leveraging the temporal stability of natural scenes.
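The decoding step can be sketched with a common circular regression recipe: regress the sine and cosine of the stimulus angle on voxel activity, recover the decoded angle with `arctan2`, and compute wrapped angular residuals. This is a toy sketch on synthetic data using scikit-learn's `RidgeCV`; it illustrates the general technique, not the paper's exact method.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(1)
n_trials, n_voxels = 200, 50
theta = rng.uniform(0, 2 * np.pi, n_trials)  # stimulus angle on each trial

# Synthetic "voxel" responses: a linear mixture of sin/cos tuning plus noise
W = rng.normal(size=(2, n_voxels))
X = np.column_stack([np.sin(theta), np.cos(theta)]) @ W
X += rng.normal(0, 0.5, X.shape)

# Regularized regression predicting the sine and cosine components jointly
Y = np.column_stack([np.sin(theta), np.cos(theta)])
model = RidgeCV(alphas=np.logspace(-2, 2, 9)).fit(X, Y)

# Decode the angle from the predicted components
pred = model.predict(X)
theta_hat = np.arctan2(pred[:, 0], pred[:, 1]) % (2 * np.pi)

# Circular residual: signed angular error wrapped to (-pi, pi]
resid = np.angle(np.exp(1j * (theta_hat - theta)))
```

Sorting these residuals by the previous trial's stimulus is what reveals an attractive or repulsive bias relative to the past item.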

This work suggests that human perception prioritizes change detection at encoding while ensuring stable representations informed by priors at later stages of processing. This is in line with predictive coding and "outside-in" models of human perception. In the context of more generalized forms of intelligence, optimal approaches may favor sparse coding of stimulus changes across time, allowing later layers to encode historical context.
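The encode-changes-then-integrate principle can be illustrated with a toy delta-coding example (not the paper's spiking network): for a temporally stable signal, transmitting only changes costs far less "energy" (total message magnitude), while a decoder that integrates over time recovers the full signal exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
# Naturalistic stimulus: a slow random walk, so successive values are similar
stim = np.cumsum(rng.normal(0, 0.1, T)) + 5.0

# Full coding: transmit the absolute stimulus value at every time step
full_msgs = stim.copy()

# Delta coding: transmit the initial value, then only the change per step
delta_msgs = np.empty(T)
delta_msgs[0] = stim[0]
delta_msgs[1:] = np.diff(stim)

# Decoder recovers the stimulus by temporal integration of the changes
decoded = np.cumsum(delta_msgs)

# Proxy for energy cost: total magnitude of transmitted messages
energy_full = np.abs(full_msgs).sum()
energy_delta = np.abs(delta_msgs).sum()
```

Because natural scenes are stable over time, the change signal is small on almost every step, so `energy_delta` is a small fraction of `energy_full` even though the decoder loses no information.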


Code Highlights

Data

All data available at https://osf.io/e5xw8/.
