CABiNet: Efficient context aggregation network for low-latency semantic segmentation

Saumya Kumaar, Ye Lyu, F. Nex, Michael Ying Yang

Research output: Working paper › Preprint › Academic



With the increasing demand for autonomous machines, pixel-wise semantic segmentation for visual scene understanding needs to be not only accurate but also efficient enough for real-time applications. In this paper, we propose CABiNet (Context Aggregated Bi-lateral Network), a dual-branch convolutional neural network (CNN) with significantly lower computational cost than the state-of-the-art, while maintaining competitive prediction accuracy. Building upon existing multi-branch architectures for high-speed semantic segmentation, we design a cheap high-resolution branch for effective spatial detailing and a context branch with lightweight versions of global aggregation and local distribution blocks, capable of capturing both the long-range and the local contextual dependencies required for accurate semantic segmentation, at low computational overhead. Specifically, we achieve 76.6% and 75.9% mIOU on the Cityscapes validation and test sets respectively, at 76 FPS on an NVIDIA RTX 2080Ti and 8 FPS on a Jetson Xavier NX. Code and trained models will be made publicly available.
Original language: English
Number of pages: 8
Publication status: Published - 2 Nov 2020

Publication series
Publisher: Cornell University


Keywords
  • cs.CV
  • cs.RO


  • CABiNet: Efficient Context Aggregation Network for Low-Latency Semantic Segmentation

    Kumaar, S., Lyu, Y., Nex, F. & Yang, M. Y., 18 Oct 2021, 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, p. 13517-13524 8 p. (Proceedings - IEEE International Conference on Robotics and Automation; vol. 2021-May).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Open Access
