Physical tasks that may seem basic to humans can be much more complicated for machines. The sheer number of degrees of freedom involved in something as versatile and complex as hand motion means that the brain must reduce these high-dimensional spaces to achieve such feats of movement. This is a method of taking complicated processes, such as the movement of individual fingers, and classifying those movements so that they can be controlled with far less information than they would otherwise require. Computational neuroscientists Yuke Yan and James G. Goodman, along with biologist Sliman J. Bensmaia, observed the musculoskeletal movements of human and monkey hand kinematics during sign language, typing, and grasping objects using principal component analysis. They then used linear discriminant analysis to determine how specific the motions were to particular conditions, such as the object grasped or the letter signed.
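The dimensionality-reduction step can be sketched in a few lines. This is a hypothetical illustration, not the authors' pipeline: the "joint angle" data below are synthetic, generated to live in a low-dimensional subspace, and the PCA is computed directly from a singular value decomposition.

```python
import numpy as np

# Hypothetical sketch: PCA on synthetic "joint angle" data standing in for
# hand-kinematics recordings. All dimensions and values are invented.
rng = np.random.default_rng(0)

n_trials, n_joints = 200, 20          # e.g., 20 joint angles per hand posture
# Build data that truly lies in a 3-dimensional subspace, plus small noise.
latent = rng.normal(size=(n_trials, 3))
mixing = rng.normal(size=(3, n_joints))
X = latent @ mixing + 0.05 * rng.normal(size=(n_trials, n_joints))

# PCA via SVD of the mean-centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)       # fraction of variance per component

# A handful of components captures nearly all the variance.
print(np.sum(explained[:3]))          # close to 1.0
```

The point of the sketch is the shape of the result: twenty measured angles, but almost all of the variance concentrated in a few components, which is the kind of structure the researchers exploited.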
They removed principal components in descending order of variance, reconstructed the hand posture on each trial from the components that remained, and classified the reconstructions using linear discriminant analysis to determine how many trials could still be identified correctly. This let them describe complicated hand gestures using a small number of variables that represent the underlying motions. Understanding which combinations of components account for the motions could, in turn, show how such intricate, complicated movements are governed by a limited set of neurons or neurophysiological mechanisms. Comparing humans and monkeys, they found that monkey hand postures were not as specific to different objects.
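The remove-and-reclassify logic can be sketched as follows. This is an illustrative stand-in, not the paper's analysis: the three "conditions" are synthetic clusters, and a simple nearest-centroid rule substitutes for linear discriminant analysis.

```python
import numpy as np

# Hypothetical sketch: reconstruct each trial from a shrinking number of
# principal components, then classify. Data are synthetic: 3 "conditions"
# (e.g., objects grasped) with distinct mean postures.
rng = np.random.default_rng(1)
n_per, n_dims = 60, 10
means = rng.normal(scale=2.0, size=(3, n_dims))
X = np.vstack([m + rng.normal(size=(n_per, n_dims)) for m in means])
y = np.repeat(np.arange(3), n_per)

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

def accuracy_with_top_k(k):
    # Project onto the top-k components, reconstruct, then classify each
    # trial by the nearest condition centroid (a stand-in for LDA).
    Xk = Xc @ Vt[:k].T @ Vt[:k]
    cents = np.stack([Xk[y == c].mean(axis=0) for c in range(3)])
    pred = np.argmin(((Xk[:, None, :] - cents) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

# Accuracy typically stays high until the informative components are gone.
print([round(accuracy_with_top_k(k), 2) for k in (10, 3, 1)])
```

Tracking accuracy as components are removed reveals how many dimensions the classification actually needs, which is the crux of the argument that a few variables suffice.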
Finally, the researchers classified hand motions using spike counts of neurons in the primary motor cortex to determine which areas of the brain corresponded with different motions. This gave them an overarching picture, from neuroscience to behavior, of how the brain makes sense of complicated, concerted movements.
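A minimal sketch of decoding movement identity from spike counts might look like this. Everything here is invented for illustration: the firing rates are random, the counts are Poisson-distributed, and a nearest-centroid rule stands in for whatever classifier the researchers actually used.

```python
import numpy as np

# Hypothetical sketch: classify "which movement" from motor-cortex spike
# counts. Rates and counts are synthetic, not recorded data.
rng = np.random.default_rng(2)
n_neurons, n_trials = 30, 80
rates = rng.uniform(2, 20, size=(3, n_neurons))   # one rate map per movement
X = np.vstack([rng.poisson(r, size=(n_trials, n_neurons)) for r in rates])
y = np.repeat(np.arange(3), n_trials)

# Classify each trial by the nearest mean spike-count vector per movement.
cents = np.stack([X[y == c].mean(axis=0) for c in range(3)])
pred = np.argmin(((X[:, None, :] - cents) ** 2).sum(-1), axis=1)
print((pred == y).mean())   # well above the 1/3 chance level
```

If spike counts alone classify movements far above chance, the neural population carries the same low-dimensional information as the kinematics, which is the bridge from brain activity to behavior the passage describes.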
Other methods of taking advantage of probability let scientists create maps that measure across variables in space. One may use a stochastic process, a family of random variables, defined on a probability space to pick out the parts of the space that correspond with different outcomes. This approach applies at every scale, from simple axon models of individual neurons to entire neural networks.
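One of the simplest stochastic processes used in single-neuron modeling is a two-state Markov chain, as in stochastic ion-channel models. The sketch below is hypothetical: the transition probabilities are invented, and the long-run fraction of time the channel spends open can be checked against the value the process's stationary distribution predicts.

```python
import numpy as np

# Hypothetical sketch: a two-state Markov chain (closed <-> open) as a
# minimal stochastic process, in the spirit of stochastic ion-channel
# models used in simple axon simulations. Probabilities are invented.
rng = np.random.default_rng(3)
p_open, p_close = 0.2, 0.1     # per-step probabilities of switching state
n_steps = 100_000

state = 0                      # 0 = closed, 1 = open
open_time = 0
for _ in range(n_steps):
    if state == 0 and rng.random() < p_open:
        state = 1
    elif state == 1 and rng.random() < p_close:
        state = 0
    open_time += state

# Long-run fraction open approaches p_open / (p_open + p_close) = 2/3.
print(open_time / n_steps)
```

Each random variable in the family is the channel's state at one time step; the map from parameters to long-run behavior is exactly the kind of probabilistic structure the passage describes.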
Very little is known about exactly how the brain interprets somatosensory activity as a particular tactile sensation. Some initial evidence toward understanding this comes from individuals with brain damage due to stroke. Researchers studying the somatosensory cortex, the part of the brain linked with tactile localization, have shown that sensations of touch on the hands and forearms tend to be mislocalized toward the centers of those body parts, both in healthy individuals and in those who have had strokes in somatosensory regions. Psychologists Janellen Huttenlocher and Susan Duncan and statistician Larry Hedges proposed that the more uncertain spatial information is, the more estimates of location are biased toward the middle of a categorical space and away from its boundaries. This method of dealing with uncertainty is another example of how the brain simplifies problems when it needs to.
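The core of that proposal can be written as a weighted average of a noisy sensory reading and the category's central value, with weights set by their relative certainties (a standard Bayesian cue combination). The function and numbers below are a hypothetical illustration, not the authors' model.

```python
# Hypothetical sketch of the category-adjustment idea: combine a noisy
# fine-grained location estimate with the category's central value,
# weighting by relative certainty. All values are invented.
def category_adjusted(reading, sigma_reading, center, sigma_category):
    # Weight on the sensory reading shrinks as its noise grows.
    w = sigma_category**2 / (sigma_category**2 + sigma_reading**2)
    return w * reading + (1 - w) * center

center = 0.5        # middle of the categorical space (e.g., the hand)
true_loc = 0.9      # a touch near the boundary

# The noisier the sensory reading, the more the estimate shifts centerward.
print(category_adjusted(true_loc, 0.05, center, 0.2))  # barely biased
print(category_adjusted(true_loc, 0.30, center, 0.2))  # pulled toward 0.5
```

Under this scheme the bias toward the center is not a flaw but the optimal response to uncertainty, which matches the mislocalization pattern described above.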
Flies and humans use similar coding strategies when processing visual information to create more refined images. Fly photoreceptors and human cone photoreceptors respond to light by adapting their gain to the local mean intensity. This creates a contrast signal: the difference between the intensity at the receptor and the local mean. The receptors then encode their response to this contrast across the range of input light values. In this way, the nervous system reduces raw intensities to proportions that fit more easily within a neuron's response range. It also lets the fly or human identify the same objects under different lighting.
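The invariance this buys can be shown with a toy contrast computation. The function below is a simplified stand-in for what a photoreceptor does, assuming contrast is defined as deviation from the local mean divided by that mean; the intensity values are invented.

```python
import numpy as np

# Hypothetical sketch of contrast coding: express each intensity as its
# deviation from the local mean, divided by that mean. Scaling the whole
# scene (a change in illumination) leaves the contrast signal unchanged.
def contrast(intensities):
    mean = intensities.mean()
    return (intensities - mean) / mean

scene = np.array([10.0, 12.0, 8.0, 14.0, 6.0])
bright_scene = 5.0 * scene          # same objects under stronger light

print(contrast(scene))
print(np.allclose(contrast(scene), contrast(bright_scene)))  # True
```

Because the mean scales along with the intensities, the contrast values are identical under both lighting conditions, which is why the same objects remain recognizable.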