Pose-based Deep Gait Recognition
Human gait, or manner of walking, is a biometric feature that allows a person to be identified when other biometric features, such as the face or iris, are not visible. In this study, the authors present a new pose-based convolutional neural network model for gait recognition. Unlike many methods that consider the full-height silhouette of a moving person, they consider the motion of points in the areas around human joints. To extract motion information, they estimate the optical flow between consecutive frames. They propose a deep convolutional model that computes pose-based gait descriptors. They compare different network architectures and aggregation methods and experimentally assess various body parts to determine which are the most important for gait recognition. In addition, they investigate the generalisation ability of the developed algorithms by transferring them between datasets. The results of these experiments show that their approach outperforms state-of-the-art methods.
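The joint-centred motion representation described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name, the fixed patch size, and the choice of summarising each patch by its mean flow vector are our assumptions; in practice a dense optical flow field and per-frame joint coordinates would come from a flow estimator and a pose estimator.

```python
import numpy as np

def joint_motion_descriptor(flow, joints, patch=8):
    """Crop optical-flow patches around each joint and summarise them.

    flow   : (H, W, 2) dense optical flow between two consecutive frames
    joints : (J, 2) array of (x, y) joint coordinates in pixels
    patch  : half-size of the square window cropped around each joint
    """
    H, W, _ = flow.shape
    parts = []
    for x, y in joints.astype(int):
        x0, x1 = max(x - patch, 0), min(x + patch, W)
        y0, y1 = max(y - patch, 0), min(y + patch, H)
        win = flow[y0:y1, x0:x1]              # motion near this joint
        # summarise the patch by its mean flow vector (a simple stand-in
        # for the CNN features computed in the paper)
        parts.append(win.reshape(-1, 2).mean(axis=0))
    return np.concatenate(parts)              # shape: (2 * J,)
```

In the paper these joint-area motion patches are fed to a convolutional network rather than averaged, but the cropping step conveys why the representation ignores the full-body silhouette.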
The article is devoted to the history and problems of creating interfaces. It shows the complexity and importance of effective interfaces and notes that this problem is systemic, multilevel, and interdisciplinary. In new systems, serious attention should be paid to human efficiency: the human remains the leading element determining the efficiency of any ergatic (human-machine) system. The main means of control in ergatic systems, including computers, is the graphic manipulator (GM), which is used to operate on-screen controls. The main styles of user interface are reviewed. The most popular are GUI interfaces (GUI - Graphical User Interface) and the WUI interfaces (WUI - Web User Interface) based on them. The development of hardware and computer modeling technology has led to the active introduction of virtual reality technology for immersing people in artificial worlds. Their main feature is full control over all parameters of the environment and the emergence of a sense of presence in the people placed in these environments, which are called immersive. Induced-environment technologies enable a number of new kinds of interface, not generally available at present, based on specially engineered virtual environments. Much attention is paid to the most advanced such systems - contactless control systems, which consist of a camera and sophisticated software. The drawbacks of modern contactless control are discussed. Work is under way on a contactless intelligent interface that will make it possible to: control a computer using data from a video camera installed on it; achieve high noise immunity; reliably identify the user; recognize the situational environment; and keep the cost acceptable.
The volume contains the abstracts of the 12th International Conference "Intelligent Data Processing: Theory and Applications". The conference is organized by the Russian Academy of Sciences, the Federal Research Center "Informatics and Control" of the Russian Academy of Sciences and the Scientific and Coordination Center "Digital Methods of Data Mining". The conference has been held biennially since 1989. It is one of the most recognizable scientific forums on data mining, machine learning, pattern recognition, image analysis, signal processing, and discrete analysis. The Organizing Committee of IDP-2018 is grateful to Forecsys Co. and CFRS Co. for providing assistance in the conference preparation and execution. The conference is funded by RFBR, grant 18-07-20075. The conference website is http://mmro.ru/en/.
In this paper, we take up the long-standing problem of how to recover 3-D shapes represented by a 2-D image, such as the image on the retina of the eye, or in a video camera. Our approach is biologically grounded in a theory of how the human visual system solves this problem, focusing on shapes that are mirror symmetrical in 3-D. A 3-D mirror-symmetrical shape can be recovered from a single 2-D orthographic or perspective image by applying several a priori constraints: 3-D mirror symmetry, 3-D compactness, and planarity of contours. From the computational point of view, the application of a 3-D symmetry constraint is challenging because it requires establishing 3-D symmetry correspondence among features of a 2-D image, which itself is asymmetrical for almost all viewing directions relative to the 3-D symmetrical shape. We describe new invariants of a 3-D to 2-D projection for the case of a pair of mirror-symmetrical planar contours, and we formally state and prove the necessary and sufficient conditions for detection of this type of symmetry in a single orthographic or perspective image.
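As background for why symmetry correspondence is tractable at all, one classical orthographic invariant (well known in the literature; the paper's new invariants for pairs of planar contours are more general) can be stated in a few lines. A 3-D reflection about a plane with unit normal $\mathbf{n}$ moves each point along that normal:

```latex
\[
P_i' \;=\; P_i + c_i\,\mathbf{n}, \qquad c_i \in \mathbb{R}.
\]
```

Since an orthographic projection $\Pi(x,y,z) = (x,y)$ is linear,

```latex
\[
\Pi(P_i') - \Pi(P_i) \;=\; c_i\,\Pi(\mathbf{n}),
\]
```

so every 2-D segment joining a pair of symmetric image points is parallel to the single fixed direction $\Pi(\mathbf{n})$. This necessary condition is what makes it possible to search for symmetry correspondences in a single image even though the image itself is asymmetrical.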
We consider the problem of estimating 3-d structure from a single still image of an outdoor urban scene. Our goal is to efficiently create 3-d models that are visually pleasing. We choose an appropriate 3-d model structure and formulate 3-d reconstruction as a model-fitting problem. Our 3-d models are composed of a number of vertical walls and a ground plane, where the ground-vertical boundary is a continuous polyline. We achieve computational efficiency by special preprocessing together with a stepwise search over the 3-d model parameters, dividing the problem into two smaller sub-problems on chain graphs. The use of Conditional Random Field models for both sub-problems allows us to incorporate various cues. We infer the orientation of the vertical walls of the 3-d model from vanishing points.
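Inference on a chain graph of the kind mentioned above can be done exactly by dynamic programming (the Viterbi algorithm). The sketch below is our illustration, not the paper's code: it assumes per-node unary costs (e.g. the cost of placing the ground-vertical boundary at height k in image column i) and a simple linear smoothness penalty between neighbouring columns.

```python
import numpy as np

def chain_dp(unary, smooth):
    """Exact MAP inference on a chain via dynamic programming.

    unary  : (N, K) array; unary[i, k] is the cost of label k at node i
    smooth : penalty per unit of label change between neighbouring nodes
    Returns the minimum-cost label sequence (length N).
    """
    N, K = unary.shape
    labels = np.arange(K)
    cost = unary[0].copy()                # best cost ending in each label
    back = np.zeros((N, K), dtype=int)    # backpointers for recovery
    for i in range(1, N):
        # pairwise term: linear penalty on label jumps between columns
        trans = cost[:, None] + smooth * np.abs(labels[:, None] - labels[None, :])
        back[i] = trans.argmin(axis=0)
        cost = trans.min(axis=0) + unary[i]
    seq = [int(cost.argmin())]
    for i in range(N - 1, 0, -1):         # trace backpointers
        seq.append(int(back[i][seq[-1]]))
    return seq[::-1]
```

Because each node only interacts with its neighbour, the whole search runs in O(N K^2) time, which is what makes splitting the reconstruction into two chain-graph sub-problems efficient.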
This work addresses the problem of video matting, that is, extracting the opacity layer of a foreground object from a video sequence. We introduce the notion of alpha-flow, which corresponds to the flow in the opacity layer. The idea is derived from the process of rotoscoping, where a user-supplied object mask is smoothly interpolated between keyframes while preserving its correspondence with the underlying image. Our key contribution is an algorithm which infers both the opacity masks and the alpha-flow in an efficient and unified manner. We embed our algorithm in an interactive video matting system where the first and last frame of a sequence are given as keyframes, and additional user strokes may be provided in intermediate frames. We show high quality results on various challenging sequences, and give a detailed comparison to competing techniques.
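The core idea of propagating an opacity layer along a flow field can be sketched in a few lines. This is a simplified stand-in for the paper's unified inference, under our own assumptions: nearest-neighbour backward warping of a single alpha mask along a given displacement field, with no joint estimation of mask and flow.

```python
import numpy as np

def warp_alpha(alpha, flow):
    """Warp an opacity mask backward along a flow field (nearest neighbour).

    alpha : (H, W) opacity layer at frame t
    flow  : (H, W, 2) displacement pointing from frame t+1 back to frame t
    Returns the predicted opacity layer at frame t+1.
    """
    H, W = alpha.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # for each output pixel, look up where it came from in frame t
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return alpha[src_y, src_x]
```

In the actual system the alpha-flow is inferred jointly with the masks rather than taken as given, and the warping would use sub-pixel interpolation, but the sketch shows how motion in the opacity layer carries a keyframe mask to intermediate frames.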
Most of today’s machine learning techniques require large amounts of manually labeled data. This problem can be alleviated by using synthetic images. Our main contribution is to evaluate traffic sign recognition methods trained on synthetically generated data and to show that their results are comparable with those of classifiers trained on a real dataset. To obtain a representative synthetic dataset, we model different sign image variations such as intra-class variability, imprecise localization, blur, lighting, and viewpoint changes. We also present a new method for traffic sign segmentation, based on a nearest neighbor search in a large set of synthetically generated samples, which improves current traffic sign recognition algorithms.
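The nearest-neighbor step against a bank of synthetic samples can be sketched as follows. This is our minimal illustration, not the authors' method: it assumes flattened image patches, plain L2 distance, and a brute-force search, whereas a real system would use rendered sign templates with the modelled variations (blur, lighting, viewpoint) and an indexed search over a much larger set.

```python
import numpy as np

def nn_classify(patch, templates, labels):
    """Assign a query patch the label of its nearest synthetic template.

    patch     : (D,) flattened query image patch
    templates : (N, D) flattened synthetic samples, rendered with varied
                lighting, blur, localization jitter, viewpoint, etc.
    labels    : list of N class labels, one per synthetic sample
    """
    dist = np.linalg.norm(templates - patch[None, :], axis=1)
    return labels[int(dist.argmin())]
```

The same lookup can return the matched template itself, whose known geometry then yields a segmentation mask for the query sign.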