INVESTIGATING THE EFFECT OF CONTEXT ON COMPARABILITY OF COMPUTERIZED PERFORMANCE-BASED TASKS
Computerized performance-based tasks can measure complex constructs, such as 21st-century skills, that are difficult to assess with traditional test formats. The broad use of performance-based tasks in educational assessment has created a need for alternative forms. However, the complex skills measured by these tasks are more likely to be unstable across problem contexts, which raises a question about the comparability of alternative forms. In this study, we examined the comparability of two forms of an interactive performance task for fourth-grade students using multigroup confirmatory factor analysis. In the task, students interact with computer-simulated agents to demonstrate cooperation skills in either a fantasy or a real-life context. The results show that the factor structure remains the same across both forms, while the context can affect the difficulty of the test forms: the fantasy context appeared easier for some cooperation indicators.