Real-time Vision-based Depth Reconstruction with NVidia Jetson
We consider the problem of depth reconstruction from downsampled sparse depth values. We compare our approach with semi-dense depth map interpolation and direct RGB-to-Depth reconstruction solutions on several datasets, including the Matterport3D dataset, which contains RGB and depth images of 90 building-scale scenes. We demonstrate that the proposed model can produce approximate depth maps for over two hundred images per second.
Depth map super-resolution is a challenging computer vision problem. In this paper, we present two deep convolutional neural networks that solve the problem of single depth map super-resolution. Both networks learn a residual decomposition and are trained with a specific perceptual loss that improves the sharpness and perceptual quality of the upsampled depth map. Experiments on several depth super-resolution benchmark datasets show state-of-the-art performance in terms of the RMSE, SSIM, and PSNR metrics, while allowing us to perform depth super-resolution in real time at 25-30 frames per second.
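The abstract above combines a pixel-wise loss with a perceptual loss computed in a feature space. A minimal NumPy sketch of that combination is shown below; the feature extractor is passed in as a function, and the weight `lam` is an illustrative value, not one taken from the paper.

```python
import numpy as np

def mse_loss(pred, target):
    """Standard pixel-wise mean squared error on depth maps."""
    return float(np.mean((pred - target) ** 2))

def perceptual_loss(pred, target, feature_extractor):
    """MSE computed in the feature space of a fixed network; penalizing
    feature differences encourages sharper edges than a purely
    pixel-wise loss."""
    return mse_loss(feature_extractor(pred), feature_extractor(target))

def combined_loss(pred, target, feature_extractor, lam=0.1):
    """Total training loss; `lam` is an illustrative weight, not a
    value from the paper."""
    return mse_loss(pred, target) + lam * perceptual_loss(pred, target, feature_extractor)
```

In practice the feature extractor would be an early layer of a pretrained network; here any array-to-array function serves for illustration.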
The problem of dense depth map inference from sparse depth values is considered in this paper. We address this issue in the situation where only low-cost sensor data and limited computational resources are available. We propose a method that performs interpolation followed by super-resolution, and compare our approach with state-of-the-art direct RGB-to-Dense reconstruction solutions. In particular, we use an encoder-decoder CNN with a loss consisting of the standard mean squared error and a perceptual loss function. Furthermore, we show that the described approach can be adapted to estimate a rough depth map in real time.
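The interpolation-then-super-resolution pipeline described above can be sketched as follows. This is a toy stand-in, not the authors' implementation: nearest-valid-pixel filling replaces the interpolation stage, and nearest-neighbour 2x upsampling replaces the encoder-decoder super-resolution CNN.

```python
import numpy as np

def fill_nearest(depth, mask):
    """Fill missing depth values with the nearest valid measurement
    (Manhattan distance); a simple stand-in for the interpolation stage."""
    valid = np.argwhere(mask)
    vals = depth[mask]
    out = depth.astype(float).copy()
    for i, j in np.argwhere(~mask):
        dist = np.abs(valid - np.array([i, j])).sum(axis=1)
        out[i, j] = vals[np.argmin(dist)]
    return out

def upsample2x(depth):
    """Nearest-neighbour 2x upsampling; in the proposed pipeline this
    step is performed by the super-resolution CNN."""
    return np.repeat(np.repeat(depth, 2, axis=0), 2, axis=1)

def reconstruct(sparse_depth, mask):
    """Interpolation followed by super-resolution."""
    return upsample2x(fill_nearest(sparse_depth, mask))
```

The two-stage split is what makes the approach cheap: dense filling is done on the low-resolution grid, and only the learned model operates at the target resolution.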
Autonomous driving depends heavily on depth information for safe operation. Recently, major improvements have been made in both supervised and self-supervised methods for depth reconstruction. However, most current approaches focus on single-frame depth estimation, where the quality ceiling is hard to surpass due to the general limitations of supervised learning with deep neural networks. One way to improve the quality of existing methods is to exploit temporal information from frame sequences. In this paper, we study ways of integrating a recurrent block into a common supervised depth estimation pipeline. We propose a novel method that takes advantage of the convolutional gated recurrent unit (convGRU) and the convolutional long short-term memory (convLSTM). We compare the convGRU and convLSTM blocks and determine the best model for the real-time depth estimation task. We carefully study the training strategy and provide new deep neural network architectures for depth estimation from monocular video that use information from past frames via an attention mechanism. We demonstrate the efficiency of exploiting temporal information by comparing our best recurrent method with existing image-based and video-based solutions for monocular depth reconstruction.
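A convGRU, mentioned above, replaces the dense matrix products of a standard GRU with convolutions so the hidden state keeps its spatial layout across frames. The single-channel NumPy sketch below shows one update step under that standard gate formulation; the kernel dictionaries `W` and `U` are an illustrative parameterization, not the paper's architecture.

```python
import numpy as np

def conv_same(x, k):
    """Naive 3x3 'same' convolution on a single-channel 2D map."""
    H, W = x.shape
    p = np.pad(x, 1)
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_gru_step(x, h, W, U):
    """One convGRU update. W, U: dicts of 3x3 kernels ('z', 'r', 'n')
    for the input and hidden paths (single channel for brevity)."""
    z = sigmoid(conv_same(x, W['z']) + conv_same(h, U['z']))      # update gate
    r = sigmoid(conv_same(x, W['r']) + conv_same(h, U['r']))      # reset gate
    n = np.tanh(conv_same(x, W['n']) + conv_same(r * h, U['n']))  # candidate state
    return (1.0 - z) * h + z * n
```

Running this step over a frame sequence carries a spatial hidden state forward, which is how temporal information enters the depth estimation pipeline.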
Computational methods to predict Z-DNA regions are in high demand for understanding the functional role of Z-DNA. The previous state-of-the-art method, Z-Hunt, is based on statistical-mechanical and energy considerations about the B- to Z-DNA transition, using sequence information alone. Z-DNA ChIP-seq experiments showed little overlap with Z-Hunt predictions, implying that sequence information alone is not sufficient to explain the emergence of Z-DNA at different genomic locations. Adding epigenetic and other functional genomic mark-ups to the DNA sequence level can help reveal functional Z-DNA sites. Here we take advantage of deep learning, which can analyze and extract information from large volumes of molecular biology data. We developed a machine learning approach, DeepZ, that aggregates information from genome-wide maps of epigenetic markers, transcription factor and RNA polymerase binding sites, and chromatin accessibility maps. With the developed model we not only verify the experimental Z-DNA predictions, but also generate a whole-genome annotation, introducing new candidate Z-DNA regions that have not yet been found in experiments and may be of interest to researchers from various fields.
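The core data-handling idea above is aggregating several genome-wide tracks into a per-position feature matrix that a classifier can score. The sketch below illustrates that step only; a single logistic layer with illustrative (untrained) weights stands in for the DeepZ network.

```python
import numpy as np

def aggregate_tracks(tracks):
    """Stack genome-wide tracks (epigenetic marks, TF / RNA polymerase
    binding, chromatin accessibility) into a per-position feature
    matrix of shape (positions, n_tracks)."""
    return np.stack(tracks, axis=1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_z_dna(features, w, b):
    """Per-position probability of belonging to a Z-DNA region.
    A logistic layer stands in for the DeepZ model here; w and b are
    illustrative, not trained parameters."""
    return sigmoid(features @ w + b)
```

Scoring every genomic position this way is what yields a whole-genome annotation rather than predictions restricted to experimentally covered loci.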