Learning Occlusion-Robust Pedestrian Representations via Uncertainty-Guided Feature Pruning
DOI:
https://doi.org/10.71465/fapm658

Keywords:
Pedestrian re-identification, occlusion handling, uncertainty-aware attention, autonomous driving, visual perception

Abstract
Occlusion and background interference remain major challenges for pedestrian re-identification in urban traffic environments. Inspired by uncertainty-aware CLIP-based frameworks, this paper introduces an uncertainty-guided feature selection mechanism that adjusts the contribution of local visual regions and semantic cues according to their estimated reliability. The proposed method is evaluated on two autonomous driving datasets with both real-world and synthetic occlusion patterns, covering occlusion ratios from 20% to 60%. Comparisons are conducted against attention-based and part-based ReID methods, including PCB, OSNet, and transformer-based attention models. The proposed approach achieves mAP improvements ranging from 4.5% to 6.2% under severe occlusion conditions, while maintaining comparable performance in fully visible settings.
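The core mechanism the abstract describes, down-weighting and discarding local regions according to estimated reliability, can be sketched compactly. The snippet below is a minimal, hypothetical illustration only, assuming horizontally striped part features and a learned log-variance head as the uncertainty estimate; the class name UncertaintyGuidedPruning, the keep_ratio parameter, and the top-k pruning rule are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn

class UncertaintyGuidedPruning(nn.Module):
    """Illustrative sketch (not the paper's code): weight local part
    features by estimated reliability, prune the least reliable parts,
    and aggregate the survivors into one pedestrian embedding."""

    def __init__(self, feat_dim: int, keep_ratio: float = 0.6):
        super().__init__()
        self.keep_ratio = keep_ratio  # assumed pruning hyperparameter
        # Assumption: a small head predicts a log-variance per part;
        # high variance is read as an occluded or background-corrupted region.
        self.uncertainty_head = nn.Linear(feat_dim, 1)

    def forward(self, part_feats: torch.Tensor) -> torch.Tensor:
        # part_feats: (batch, num_parts, feat_dim), e.g. horizontal stripes
        log_var = self.uncertainty_head(part_feats).squeeze(-1)  # (batch, num_parts)
        reliability = torch.softmax(-log_var, dim=1)  # low variance -> high weight
        # Hard pruning: keep only the top-k most reliable parts.
        k = max(1, int(self.keep_ratio * part_feats.size(1)))
        topk = reliability.topk(k, dim=1).indices
        mask = torch.zeros_like(reliability).scatter_(1, topk, 1.0)
        weights = reliability * mask
        weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-8)
        # Reliability-weighted sum of the surviving part features.
        return (weights.unsqueeze(-1) * part_feats).sum(dim=1)  # (batch, feat_dim)

# Usage: 8 images, 6 body-part stripes, 256-d part embeddings.
feats = torch.randn(8, 6, 256)
print(UncertaintyGuidedPruning(256)(feats).shape)  # torch.Size([8, 256])

Normalizing the surviving weights into a convex combination is one plausible reading of the abstract's claim: on fully visible pedestrians the part weights stay near uniform, so the pruned model can remain comparable to the baseline in unoccluded settings.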
License
Copyright (c) 2026 Lukas M. Schneider, Anna K. Vogel, Tobias R. Weber (Author)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.