Scene parsing via dense recurrent neural networks with attentional selection
Permanent link to this record: http://hdl.handle.net/20.500.12613/4567
Abstract: © 2019 IEEE. Recurrent neural networks (RNNs) have been shown to improve scene parsing by capturing long-range dependencies among image units. In this paper, we propose dense RNNs for scene labeling that explore diverse long-range semantic dependencies among image units. Unlike existing RNN-based approaches, our dense RNNs capture richer contextual dependencies for each image unit by enabling direct connections between every pair of image units, which significantly enhances their discriminative power. In addition, to select relevant dependencies and suppress irrelevant ones for each unit among the dense connections, we introduce an attention model into the dense RNNs. The attention model automatically assigns more weight to helpful dependencies and less weight to irrelevant ones. Integrating the dense RNNs with convolutional neural networks (CNNs), we develop an end-to-end scene labeling system. Extensive experiments on three large-scale benchmarks demonstrate that the proposed approach improves the baselines by large margins and outperforms other state-of-the-art algorithms.
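The core idea in the abstract — every image unit aggregating context from every other unit, with attention weights deciding which dependencies matter — can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation (the paper uses dense RNNs with a learned attention model; here scaled dot-product scores stand in for the learned relevance function, and the function name `dense_attention_context` is hypothetical):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dense_attention_context(feats):
    """For each image unit, aggregate features from ALL units,
    weighted by attention so helpful dependencies get more weight
    and irrelevant ones are suppressed.

    feats: (N, C) array — N = H*W flattened image units, C channels.
    Returns: (N, C) context, where context[i] = sum_j a_ij * feats[j].
    """
    # Pairwise relevance scores between every pair of units
    # (a stand-in for the paper's learned attention model).
    scores = feats @ feats.T / np.sqrt(feats.shape[1])  # (N, N)
    attn = softmax(scores, axis=1)  # each row sums to 1
    return attn @ feats             # attention-weighted aggregation

# Toy example: 4 "image units" with 3 feature channels each.
feats = np.arange(12, dtype=float).reshape(4, 3)
ctx = dense_attention_context(feats)
print(ctx.shape)  # (4, 3): one context vector per unit
```

In the full system described by the abstract, such dense, attention-gated aggregation is realized inside RNN layers stacked on CNN features, and the per-pair weights are learned end-to-end rather than computed from raw dot products.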
Published in: Proceedings - 2019 IEEE Winter Conference on Applications of Computer Vision, WACV 2019