Publications in this section: 9
Article
Utochkin I. S., Khvostov V., Stakina Y. Cognition. 2018. Vol. 179. P. 178-191.

Although objects around us vary in a number of continuous dimensions (color, size, orientation, etc.), we tend to perceive the objects using more discrete, categorical descriptions (e.g., berries and leaves). Previously, we described how continuous ensemble statistics of simple features are transformed into categorical classes: the visual system tests whether the feature distribution has one or several peaks, each representing a likely “category”. Here, we tested the mechanism of segmentation for more complex conjunctions of features. Observers discriminated between two textures filled with lines of various lengths and orientations, which had the same distributions between the textures but opposite directions of correlation. Critically, feature distributions could be “segmentable” (only extreme feature values and a large gap between them) or “non-segmentable” (both extreme and middle values with a smooth transition are present). Segmentable displays yielded steeper psychometric functions, indicating better discrimination (Experiment 1). The effect of segmentability arises early in visual processing (Experiment 2) and is likely provided by global sampling of the entire field (Experiment 3). Also, rapid segmentation requires both feature dimensions to have a “segmentable” distribution supporting division of the textures into categorical classes of conjunctions. We propose that observers select items from one side (peak) of one dimension and sample mean differences along a second dimension within the selected subset. In this scenario, subset selection is a limiting factor (Experiment 4) of texture discrimination. Yet, segmentability provided by sharp feature distributions seems to facilitate both subset selection and mean comparison.

Added: Jun 25, 2018
Article
Kristjansson A., Thornton I. M., Kristjansson T. et al. Cognition. 2020. Vol. 194. P. 1-13.

Visual search tasks play a key role in theories of visual attention. But single-target search tasks may provide only a snapshot of attentional orienting. Foraging tasks with multiple targets of different types arguably provide a closer analogy to everyday attentional processing. Set-size effects have long formed the basis in the literature for inferring how attention operates during visual search. We therefore measured the effects of absolute set-size (constant target-distractor ratio) and relative set-size (constant set-size but varying target-distractor ratio) on foraging patterns during “feature” foraging (targets differed from distractors on a single feature) and “conjunction” foraging (targets differed from distractors on a combination of two features). Patterns of runs of same-target-type selection were similar regardless of whether absolute or relative set-size varied: long sequential runs during conjunction foraging but rapid switching between target types during feature foraging. But although foraging strategies differed between feature and conjunction foraging, surprisingly, intertarget times throughout foraging trials did not differ much between the conditions. Typical response-time-by-set-size patterns for single-target visual search tasks were observed only for the last target during foraging. Furthermore, the foraging patterns within trials involved several distinct phases that may serve as markers of particular attentional operations. Foraging tasks provide a remarkably intricate picture of attentional selection, far more detailed than traditional single-target visual search tasks, and well-known theories of visual attention have difficulty accounting for key aspects of the observed foraging patterns. Finally, we discuss how theoretical conceptions of attention could be modified to account for these effects.

Added: Jun 24, 2020
Article
Scheepers C., Galkina A. I., Shtyrov Y. et al. Cognition. 2019. Vol. 189. P. 155-166.

A number of recent studies found evidence for shared structural representations across different cognitive domains such as mathematics, music, and language. For instance, Scheepers et al. (2011) showed that English speakers’ choices of relative clause (RC) attachments in partial sentences like The tourist guide mentioned the bells of the church that … can be influenced by the structure of previously solved prime equations such as 80 − (9 + 1) × 5 (making high RC-attachments more likely) versus 80 − 9 + 1 × 5 (making low RC-attachments more likely). Using the same sentence completion task, Experiment 1 of the present paper fully replicated this cross-domain structural priming effect in Russian, a morphologically rich language. More interestingly, Experiment 2 extended this finding to more complex three-site attachment configurations and showed that, relative to a structurally neutral baseline prime condition, N1-, N2-, and N3-attachments of RCs in Russian were equally susceptible to structural priming from mathematical equations such as 18 + (7 + (3 + 11)) × 2, 18 + 7 + (3 + 11) × 2, and 18 + 7 + 3 + 11 × 2, respectively. The latter suggests that cross-domain structural priming from mathematics to language must rely on detailed, domain-general representations of hierarchical structure.

Added: Feb 12, 2020
Article
Vukovic N., Williams J. N. Cognition. 2015. Vol. 142.

The factors that contribute to perceptual simulation during sentence comprehension remain underexplored. Extant research on perspective taking in language has largely focused on linguistic constraints, such as the role of pronouns in guiding perspective adoption. In the present study, we identify preferential usage of egocentric and allocentric reference frames in individuals, and test the two groups on a standard sentence-picture verification task. Across three experiments, we show that individual biases in spatial reference frame adoption observed in non-linguistic tasks influence visual simulation of perspective in language. Our findings suggest that typically reported grand-averaged effects may obscure important between-subject differences, and support proposals arguing for representational pluralism, where perceptual information is integrated dynamically and in a way that is sensitive to contextual and especially individual constraints.

Added: Aug 13, 2015
Article
Kristjansson A., Campana G., Chetverikov A. Cognition. 2020. Vol. 196. P. 1-7.

Our interactions with the visual world are guided by attention and visual working memory. Things that we look for and those we ignore are stored as templates that reflect our goals and the tasks at hand. The nature of such templates has been widely debated. A recent proposal is that these templates can be thought of as probabilistic representations of task-relevant features. Crucially, such probabilistic templates should accurately reflect feature probabilities in the environment. Here we ask whether observers can quickly form a correct internal model of a complex (bimodal) distribution of distractor features. We assessed observers’ representations by measuring the slowing of visual search when target features unexpectedly match a distractor template. Distractor stimuli were heterogeneous, randomly drawn on each trial from a bimodal probability distribution. Using two targets on each trial, we tested whether observers encode the full distribution, only one peak of it, or the average of the two peaks. Search was slower when the two targets corresponded to the two modes of a previous distractor distribution than when one target was at one of the modes and the other was between them or outside the distribution range. Furthermore, targets on the modes were reported later than targets between the modes, which, in turn, were reported later than targets outside this range. This shows that observers use a correct internal model, representing both distribution modes using templates based on the full probability distribution rather than just one peak or simple summary statistics. The findings further confirm that performance in odd-one-out search with repeated distractors cannot be described by a simple decision rule. Our findings indicate that the probabilistic visual working memory templates guiding attention dynamically adapt to task requirements, accurately reflecting the probabilistic nature of the input.

Added: Jun 24, 2020
Article
Martinovic J., Paramei G., MacInnes W. Cognition. 2020. Vol. 201. P. 104281.

Chromatic stimuli across a boundary of basic color categories (BCCs; e.g., blue and green) are discriminated faster than colorimetrically equidistant colors within a given category. Russian has two BCCs for blue, sinij 'dark blue' and goluboj 'light blue'. These language-specific BCCs were reported to enable native Russian speakers to discriminate cross-boundary dark and light blues faster than English speakers (Winawer et al., 2007, PNAS, 4, 7780-7785). We re-evaluated this finding in two experiments that employed the same tasks as the cited study. In Experiment 1, Russian and English speakers categorized colors as sinij / goluboj or dark blue / light blue, respectively; this was followed by a color discrimination task. In Experiment 2, Russian speakers initially performed the discrimination task on sinij / goluboj and goluboj / zelënyj 'green' sets. They then categorized these colors in three frequency contexts: (i) each stimulus presented an equal number of times (unbiased); (ii) sinij or goluboj presented more frequently; (iii) goluboj or zelënyj presented more frequently. We observed a boundary response-speed advantage for goluboj / zelënyj but not for sinij / goluboj. The frequency bias affected only the sinij / goluboj boundary, such that in a lighter context the boundary shifted towards lighter shades, and vice versa. Contrary to previous research, our results show that in Russian, stimulus discrimination at the lightness-defined blue BCC boundary is not reflected in processing speed. The sinij / goluboj boundary did, however, have a sharper categorical transition than the dark blue / light blue boundary.

Added: Apr 6, 2020
Article
Miozzo M., Petrova A., Fischer-Baum S. et al. Cognition. 2016. Vol. 154. P. 69-80.

Reduced short-term memory (STM) capacity has been reported for sign as compared to speech when items have to be recalled in a specific order. This difference has been attributed to more precise and efficient serial position encoding in verbal STM (used for speech) than in visuo-spatial STM (used for sign). In the present investigation we tested whether the reduced STM capacity with signs stems from a lack of positional encoding available in verbal STM. Error analyses reported in prior studies have revealed that positions in verbal STM are defined by distance from both the start and the end of the sequence (a both-edges positional encoding scheme). Our analyses of the errors made by deaf participants with finger-spelled letters revealed that the both-edges positional encoding scheme also underlies the STM representation of signs. These results indicate that the cause of the STM disadvantage is not the type of positional encoding but rather the difficulty of binding an item in visuo-spatial STM to its specific position in the sequence. The both-edges positional encoding scheme could be specific to sign, since it has not been found in visuo-spatial STM tasks conducted with hearing participants.

Added: Nov 1, 2018
Article
Kristjansson A., Sigurdardottir H. M., Fridriksdottir L. E. et al. Cognition. 2018. Vol. 175. P. 157-168.

Evidence of interdependencies between face and word processing mechanisms suggests possible links between reading problems and abnormal face processing. In two experiments we assessed such high-level visual deficits in people with a history of reading problems. Experiment 1 showed that people who were worse at face matching had greater reading problems. In Experiment 2, matched dyslexic and typical readers were tested, and difficulties with face matching were consistently found to predict dyslexia over and above both novel-object matching and matching of noise patterns that shared low-level visual properties with faces. Furthermore, ADHD measures could not account for face matching problems. We speculate that reading difficulties in dyslexia are partially caused by specific deficits in high-level visual processing, in particular for visual object categories such as faces and words with which people have extensive experience.

Added: Jun 23, 2020