Representing color and orientation ensembles: Can observers learn multiple feature distributions?
Objects have a variety of features that can be represented as probability distributions. Recent findings show that, in addition to the mean and variance, the visual system can also encode the shape of feature distributions for features such as color or orientation. Using an odd-one-out search task, we investigated observers' ability to encode two feature distributions simultaneously. Our stimuli were defined by two distinct features (color and orientation), only one of which was relevant to the search task. We asked whether the irrelevant feature distribution influences learning of the task-relevant distribution and whether observers also encode the irrelevant distribution. Although considerable learning of feature distributions occurred, especially for color, our results also suggest that adding a second, irrelevant feature distribution negatively affected encoding of the relevant one and that little learning of the irrelevant distribution occurred. There was also an asymmetry between the two features: Searching for the oddly oriented target was more difficult than searching for the oddly colored target, which was reflected in worse learning of the orientation distribution. Overall, the results demonstrate that it is possible to encode information about two feature distributions simultaneously but also reveal considerable limits to this encoding.