This paper studies learning in strategic environments using experimental data from the Rock-Paper-Scissors game. In a repeated game framework, we explore how human subjects respond to the uncertain behavior of a strategically sophisticated opponent. We model this opponent as a robot playing a stationary strategy with superimposed noise that varies across four experimental treatments. Using experimental data from 85 subjects, each playing against such a stationary robot for 100 periods, we show that humans can decode the robot's strategy, outperforming a uniform random response by 17% on average. We further show that the human ability to recognize such strategies decreases with the amount of exogenous noise in the robot's behavior. We then fit the learning data to classical Reinforcement Learning (RL) and Fictitious Play (FP) models and show that the classic action-based approach to learning is inferior to the strategy-based one. Unlike previous papers in this field, e.g. Ioannou and Romero (2014), we extend and adapt strategy-based learning techniques to the 3x3 game. We also show, using a combination of experimental and ex-post survey data, that human participants are better at learning separate components of an opponent's strategy than at recognizing this strategy as a whole. This decomposition offers them a shorter and more intuitive way to figure out their own best response. We build a strategic extension of the classical learning models accounting for these behavioral phenomena.
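The action-based Fictitious Play benchmark mentioned above can be sketched in a few lines. The snippet below is an illustrative toy model, not the paper's estimated specification: the function names, the single-favored-action form of the robot's strategy, and the noise parameterization are all assumptions made for the example.

```python
import random
from collections import Counter

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # key is beaten by value

def robot_action(favored="rock", noise=0.2, rng=random):
    """Hypothetical stationary strategy with superimposed noise:
    play the favored action, but a uniform random action with prob. `noise`."""
    if rng.random() < noise:
        return rng.choice(ACTIONS)
    return favored

def fictitious_play(periods=100, noise=0.2, seed=0):
    """Action-based FP: best-respond to the empirical frequency
    of the opponent's past actions."""
    rng = random.Random(seed)
    counts = Counter({a: 1 for a in ACTIONS})  # uniform prior over opponent actions
    wins = 0
    for _ in range(periods):
        believed = max(counts, key=counts.get)  # modal opponent action so far
        my_action = BEATS[believed]             # best response to that belief
        opp = robot_action(noise=noise, rng=rng)
        counts[opp] += 1                        # update empirical counts
        if BEATS[opp] == my_action:
            wins += 1
    return wins / periods
```

Against this toy robot, the FP learner's win rate converges to roughly the probability that the robot plays its favored action, which illustrates why performance against the robot degrades as the exogenous noise grows.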
Using a simplified multistage bidding model with asymmetrically informed agents, De Meyer and Saley demonstrated the idea of an endogenous origin of the Brownian component in the evolution of stock market prices: random price fluctuations may be caused by the strategic randomization of "insiders." The model reduces to a repeated game with incomplete information. This paper presents a survey of the numerous studies inspired by the pioneering publication of De Meyer and Saley.