Visual Biases Near the Hands Help Us Perform Specific Actions


Using your hands to perform tasks in specific ways can change the way you see things near your hands, according to new research published in Psychological Science, a journal of the Association for Psychological Science. The research shows that learning to grasp an object with the backs of the hands made participants more sensitive to motion near their hands, while learning to pick up an object with their fingertips enhanced their perception of spatial detail near their hands.

“These results support the idea that vision is, in a sense, tuned for action,” says psychological scientist Laura Thomas of North Dakota State University, author of the study. “This evidence suggests that people experience flexible visual biases when viewing information within reach that may aid the actions they are prepared to take.”

Previous studies have indicated that our visual biases – the visual information we’re attuned to at a given time – may adapt to the immediate context. When participants in one study adopted a power-grasp posture, they showed greater sensitivity to motion-related information that would allow them to make a quick and forceful grab. When the participants adopted a precision-grasp posture, they were more sensitive to detailed spatial information that would help them manipulate something small.

These findings suggest that the visual system prioritizes processing information that is relevant to the actions we are likely to take, allowing us to be more effective power or precision graspers, depending on the situation.

Thomas wondered whether a short-term training experience might be sufficient to induce these action-oriented visual biases. To find out, she conducted two experiments in which participants learned to use their hands in new ways to pick up objects.

In the first experiment, 60 student participants completed two visual perception tasks, with their hands either resting near the display or in their laps. In one task, they saw a group of moving dots—some of which moved in the same direction—and had to indicate the direction in which the dots appeared to be moving. In the other task, they saw a group of stationary dots—some of which were arranged in a particular spatial pattern—and had to say whether the dots formed a radial or a concentric pattern.

The participants then practiced a new type of power grasp, using the backs of their hands to pick up and move a plunger.

After this training, the participants completed the two visual perception tasks again.

In the second experiment, another group of 60 student participants followed the same procedure – this time, they learned a new type of precision grasp, using the tips of their little fingers to pick up and move a bean.

The results of the two experiments showed that potential action seemed to drive the way the participants processed visual information, but only when that information was near their hands.

Participants were better at identifying the direction in which the dots were shifting after learning the back-of-the-hands power grasp, but only when their hands were near the display.

And they were more accurate in identifying the spatial pattern of dots after learning the fingertips precision grasp – again, only when their hands were near the display.

Together, these findings suggest that our visual system flexibly adapts to recent experiences, allowing us to take action in effective and appropriate ways.

Thomas is planning further research to examine this phenomenon, investigating whether the kinds of visual biases she has observed in lab-based tasks also emerge when people look at and interact with everyday objects.

“The more we learn about how visual processing biases near the hands operate, the more potential we have to make recommendations about how to best present information on handheld devices, like smartphones and tablets, to meet users’ information processing goals,” she says.

This material is based on work supported by the National Science Foundation under Grant BCS 1556336 and by a Google Faculty Research Award.

All data have been made publicly available via the Open Science Framework. This article has received the badge for Open Data.
