Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2020)
Vol. 4.
SciTePress, 2020.
Academic editors: G. M. Farinella, P. Radeva, J. Braz
Alanov A., Kochurov M., Volkhonskiy D. et al. In: Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2020). Vol. 4. SciTePress, 2020. P. 214-221.
We propose a novel multi-texture synthesis model based on generative adversarial networks (GANs) with a user-controllable mechanism. This control allows the user to explicitly specify which texture the model should generate. The property follows from an encoder that learns a latent representation for each texture in the dataset. To ensure ...
Added: November 8, 2020
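The user-control mechanism described in the abstract can be illustrated with a minimal sketch. All names, dimensions, and the toy generator below are hypothetical stand-ins, not the authors' architecture: the point is only the conditioning interface, where each dataset texture gets a latent code from an encoder and the generator synthesizes the texture selected by that code.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8    # hypothetical latent-code size
NUM_TEXTURES = 5  # hypothetical dataset size

# Stand-in for the encoder's output: one learned latent code per texture.
texture_codes = rng.normal(size=(NUM_TEXTURES, LATENT_DIM))

def generate(texture_id: int, noise: np.ndarray) -> np.ndarray:
    """Toy generator: combines the chosen texture's latent code with noise.
    A real GAN generator would be a deep network; this only shows how
    selecting a code selects which texture is synthesized."""
    code = texture_codes[texture_id]
    return np.outer(noise, code)  # stub "image" of shape (noise_dim, LATENT_DIM)

patch = generate(texture_id=2, noise=rng.normal(size=16))
print(patch.shape)  # (16, 8)
```

Swapping `texture_id` while keeping the noise fixed is what makes the generation explicitly user-controllable.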
Proceedings of the International conference on computer vision theory and applications (VISAPP 2015)
[s.n.], 2015
Added: April 13, 2016
Sokolova A. D., Savchenko A. In: Proceedings of the IV International Conference and Youth School "Information Technologies and Nanotechnologies" (ITNT 2018). Samara: Novaya Tekhnika, 2018. Ch. 128. P. 946-952.
The task of organizing information in video surveillance systems is addressed by grouping video tracks that contain the same face. We examine methods for aggregating the features of individual frames extracted by deep convolutional neural networks. Tracks containing the same face are grouped using known face verification algorithms and clustering methods. Experimental study on ...
Added: October 18, 2018
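A minimal sketch of the pipeline the abstract outlines, under stated assumptions: the CNN feature extractor is mocked by synthetic vectors, averaging is used as the frame-feature aggregation method, and a simple greedy cosine-similarity grouping stands in for the verification-based clustering; thresholds and dimensions are hypothetical.

```python
import numpy as np

def track_descriptor(frame_features: np.ndarray) -> np.ndarray:
    """Aggregate per-frame CNN features of one track: L2-normalize each
    frame's feature vector, average, and renormalize."""
    f = frame_features / np.linalg.norm(frame_features, axis=1, keepdims=True)
    d = f.mean(axis=0)
    return d / np.linalg.norm(d)

def group_tracks(descriptors, threshold=0.8):
    """Greedy grouping: two tracks join one group if the cosine similarity
    of their descriptors exceeds a verification threshold (hypothetical)."""
    labels = [-1] * len(descriptors)
    next_label = 0
    for i, d in enumerate(descriptors):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        for j in range(i + 1, len(descriptors)):
            if labels[j] == -1 and float(d @ descriptors[j]) >= threshold:
                labels[j] = next_label
        next_label += 1
    return labels
```

With descriptors of two tracks of the same person and one of another, the first two receive the same group label and the third a different one.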
Tarasov Alexander V., Savchenko A. In: Proceedings of Analysis of Images, Social Networks and Texts – 7th International Conference, AIST 2018, Moscow, Russia, July 5-7, 2018, Revised Selected Papers. Lecture Notes in Computer Science. Vol. 11179. Berlin: Springer, 2018. Ch. 19. P. 191-198.
In this paper we address the group-level emotion classification problem in video analytic systems. We propose to apply the MTCNN face detector to obtain facial regions in each video frame. Next, off-the-shelf image features are extracted from each detected face using pretrained convolutional neural networks. The features of the whole frame are computed as a ...
Added: December 12, 2018
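The aggregation step described in the abstract can be sketched as follows. MTCNN detection and the CNN feature extractor are mocked here by ready-made arrays, and the dimensions are hypothetical; only the pooling logic is shown: a frame descriptor is the mean of the per-face embeddings, and a video-level descriptor pools over frames.

```python
import numpy as np

def frame_descriptor(face_features: np.ndarray) -> np.ndarray:
    """face_features: (num_faces, dim) CNN embeddings of the faces
    detected in one frame; returns the mean embedding."""
    return face_features.mean(axis=0)

def video_descriptor(frames) -> np.ndarray:
    """frames: list of (num_faces_i, dim) arrays, one per frame.
    Pools frame descriptors into a single video-level feature."""
    return np.mean([frame_descriptor(f) for f in frames], axis=0)

# Toy example: two frames, with two faces and one face respectively.
frames = [np.array([[1.0, 3.0], [3.0, 1.0]]), np.array([[4.0, 4.0]])]
print(video_descriptor(frames))  # [3. 3.]
```

The resulting descriptor would then be fed to a group-level emotion classifier; the classifier itself is outside this sketch.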