Scene parsing via dense recurrent neural networks with attentional selection
Genre
Pre-print
Date
2019-03-04
Author
Fan, H
Chu, P
Latecki, LJ
Ling, H
Permanent link to this record
http://hdl.handle.net/20.500.12613/4567
Metadata
DOI
10.1109/WACV.2019.00198
Abstract
© 2019 IEEE. Recurrent neural networks (RNNs) have been shown to improve scene parsing by capturing long-range dependencies among image units. In this paper, we propose dense RNNs for scene labeling that explore rich long-range semantic dependencies among image units. Unlike existing RNN-based approaches, our dense RNNs capture richer contextual dependencies for each image unit by enabling direct connections between every pair of image units, which significantly enhances their discriminative power. In addition, to select relevant dependencies and suppress irrelevant ones for each unit among the dense connections, we introduce an attention model into the dense RNNs. The attention model automatically assigns more weight to helpful dependencies and less to irrelevant ones. Integrating the dense RNNs with convolutional neural networks (CNNs), we develop an end-to-end scene labeling system. Extensive experiments on three large-scale benchmarks demonstrate that the proposed approach improves the baselines by large margins and outperforms other state-of-the-art algorithms.
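The record contains no code, but the mechanism the abstract describes (every image unit attending to every other unit, with attention weights selecting which pairwise dependencies matter) can be sketched compactly. Below is a minimal PyTorch sketch under stated assumptions: dot-product attention over flattened CNN feature positions stands in for the paper's dense-RNN propagation, and the module name `DenseAttentionContext`, its 1x1 projections, and the fusion step are illustrative choices, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseAttentionContext(nn.Module):
    """Illustrative sketch: attention-selected dense dependencies
    between image units (spatial positions of a CNN feature map).
    Not the paper's exact dense-RNN formulation."""

    def __init__(self, channels, hidden):
        super().__init__()
        self.query = nn.Conv2d(channels, hidden, kernel_size=1)
        self.key = nn.Conv2d(channels, hidden, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Fuse each unit's own feature with its attended context.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, N, hidden)
        k = self.key(x).flatten(2)                     # (B, hidden, N)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, N, C)
        # Dense pairwise scores: every unit against every other unit;
        # softmax assigns more weight to helpful dependencies.
        attn = F.softmax(q @ k / q.size(-1) ** 0.5, dim=-1)  # (B, N, N)
        context = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([x, context], dim=1))

# Usage: refine backbone features before a per-pixel classifier head.
feats = torch.randn(2, 256, 32, 32)             # hypothetical CNN output
refined = DenseAttentionContext(256, 64)(feats)
print(refined.shape)                            # torch.Size([2, 256, 32, 32])
```

In the paper's actual system this selection is built into the dense RNN connections rather than a standalone attention layer, but the sketch shows the core idea of weighting all pairwise unit-to-unit dependencies before aggregation.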
Citation to related work
IEEE
Has part
Proceedings - 2019 IEEE Winter Conference on Applications of Computer Vision, WACV 2019
http://dx.doi.org/10.34944/dspace/4549