A history of reward causes a decrease in error corrections in sensorimotor adaptation

Researcher(s)

  • Larissa Chelius, Computer Science, University of Delaware

Faculty Mentor(s)

  • Joshua Cashaback, Biomedical Engineering, University of Delaware

Abstract

An expert archer will tune their shot after a miss, but a novice may completely alter their technique. Both the immediate error and an individual's history of success influence motor behavior. In fact, multiple learning processes govern how individuals learn from feedback. Reinforcement learning is a process by which a person learns to make decisions by performing actions in an environment to maximize cumulative reward. Error-based learning improves performance by minimizing the difference between predicted and actual outcomes. Despite extensive research on these learning processes in isolation, little is understood about how they interact. Here we examine how a history of reward influences error corrections.
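
As a rough illustration (the abstract does not give the study's equations), error-based learning is commonly written as a delta-rule correction on the reach, and reinforcement learning as a running update of an expected-reward estimate:

    x_{t+1} = x_t - \eta \, e_t              % error-based: shift the reach against the observed error
    V_{t+1} = V_t + \alpha \, (r_t - V_t)    % reinforcement: update expected reward by the prediction error

where $e_t$ is the observed error, $r_t$ the reward on trial $t$, and $\delta_t = r_t - V_t$ the reward prediction error.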

We address this gap by simulating two candidate models of the reinforcement learning task: the Expected Value Model and the Reward Prediction Error Model. Simulated reward histories and reach executions reveal how the learning processes may interact computationally. Because it is driven by past reward, the Reward Prediction Error Model predicts larger adjustments after a miss than the Expected Value Model does. This difference highlights a qualitative divergence in how the models respond to errors given historical reward data. We then collected experimental data from participants performing a reaching task, measuring both adjustments after errors and reward-based learning.
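
A minimal simulation sketch makes the contrast concrete, assuming textbook delta-rule forms; the adjustment rules below (rpe_adjustment, ev_adjustment) are illustrative stand-ins, not the study's actual model equations. After a streak of rewarded reaches, a single miss produces a large negative prediction error, so the Reward Prediction Error Model calls for a large correction, while the Expected Value Model, anchored to the still-high expected value, barely adjusts.

    # Minimal sketch: compare how two hypothetical adjustment rules respond
    # to a miss that follows a long streak of rewarded reaches.
    alpha = 0.3                   # learning rate for the expected-reward estimate
    V = 0.0                       # running estimate of expected reward (0..1)
    rewards = [1.0] * 10 + [0.0]  # ten rewarded reaches, then one miss

    for r in rewards:
        rpe = r - V                      # reward prediction error
        # Assumed RPE-model rule: the correction scales with the size of a
        # negative prediction error, so a surprising miss -> big adjustment.
        rpe_adjustment = max(-rpe, 0.0)
        # Assumed EV-model rule: the correction shrinks as expected value
        # grows, so after consistent success a miss is largely disregarded.
        ev_adjustment = 1.0 - V
        V += alpha * rpe                 # delta-rule update of expected reward

    print(f"miss after 10 rewards: RPE model adjusts {rpe_adjustment:.2f}, "
          f"EV model adjusts {ev_adjustment:.2f}")

On the miss trial this yields an RPE-model adjustment near 0.97 versus an EV-model adjustment near 0.03, reproducing the qualitative divergence described above.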

The collected data align with the predictions of the Expected Value Model, suggesting that the human motor system favors consistency when actions have been paired with success, disregarding the occasional error. This has significant implications for developing targeted rehabilitation strategies for motor disorders and for improving educational methods for skill acquisition.