Research News
Paper by Liu Chang (Master's student) and Dong Yanni published in INFORMATION FUSION
Published: 2025-08-29    Posted by: Yi Zhen    Reviewed by: Ren Fu

Title: COMO: Cross-mamba interaction and offset-guided fusion for multimodal object detection

Authors: Liu, C (Liu, Chang); Ma, X (Ma, Xin); Yang, XC (Yang, Xiaochen); Zhang, YX (Zhang, Yuxiang); Dong, YN (Dong, Yanni)

Source Publication: INFORMATION FUSION, Volume: 125, Article Number: 103414, DOI: 10.1016/j.inffus.2025.103414, Early Access Date: JAN 2026, Publication Date: JAN 2026

Abstract: Single-modal object detection tasks often experience performance degradation when encountering diverse scenarios. In contrast, multimodal object detection tasks can offer more comprehensive information about object features by integrating data from various modalities. Current multimodal object detection methods generally use various fusion techniques, including conventional neural networks and transformer-based models, to implement feature fusion strategies and achieve complementary information. However, since multimodal images are captured by different sensors, there are often misalignments between them, making direct matching challenging. This misalignment hinders the ability to establish strong correlations for the same object across different modalities. In this paper, we propose a novel approach called the CrOss-Mamba interaction and Offset-guided fusion (COMO) framework for multimodal object detection tasks. The COMO framework employs the cross-mamba technique to formulate feature interaction equations, enabling multimodal serialized state computation. This results in interactive fusion outputs while reducing computational overhead and improving efficiency. Additionally, COMO leverages high-level features, which are less affected by misalignment, to facilitate interaction and transfer complementary information between modalities, addressing the positional offset challenges caused by variations in camera angles and capture times. Furthermore, COMO incorporates a global and local scanning mechanism in the cross-mamba module to capture features with local correlation, particularly in remote sensing images. To preserve low-level features, the offset-guided fusion mechanism ensures effective multiscale feature utilization, allowing the construction of a multiscale fusion data cube that enhances detection performance. The proposed COMO approach has been evaluated on three benchmark multimodal datasets consisting of RGB and infrared image pairs, demonstrating state-of-the-art performance in multimodal object detection tasks. It offers a solution tailored for remote sensing data, making it more applicable to real-world scenarios. The code will be available at //github.com/luluyuu/COMO.
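To make the cross-modal interaction described in the abstract more concrete, below is a minimal, hypothetical sketch of how high-level RGB and infrared feature maps can be serialized and allowed to exchange complementary information before detection. This is not the authors' implementation (the released code is at //github.com/luluyuu/COMO): a standard cross-attention block is used here only as a stand-in for the paper's cross-mamba module, and every class name, tensor shape, and hyperparameter is illustrative.

import torch
import torch.nn as nn

class CrossModalInteraction(nn.Module):
    """Toy stand-in for cross-modal interaction (not the paper's cross-mamba block)."""
    def __init__(self, dim: int):
        super().__init__()
        # Cross-attention used as an illustrative substitute for cross-mamba.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x_rgb: torch.Tensor, x_ir: torch.Tensor) -> torch.Tensor:
        # x_rgb, x_ir: (B, C, H, W) high-level feature maps from each modality.
        b, c, h, w = x_rgb.shape
        q = x_rgb.flatten(2).transpose(1, 2)   # (B, HW, C) serialized RGB tokens
        kv = x_ir.flatten(2).transpose(1, 2)   # (B, HW, C) serialized IR tokens
        fused, _ = self.attn(q, kv, kv)        # RGB tokens query complementary IR information
        fused = q + self.gate(fused) * fused   # gated residual fusion of the two modalities
        return fused.transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    rgb = torch.randn(2, 64, 16, 16)   # dummy high-level RGB features
    ir = torch.randn(2, 64, 16, 16)    # dummy high-level infrared features
    out = CrossModalInteraction(64)(rgb, ir)
    print(out.shape)                   # torch.Size([2, 64, 16, 16])

The sketch only mirrors the data flow the abstract describes: features from the two modalities are serialized into token sequences, one modality draws complementary information from the other, and the result is folded back into a spatial feature map for downstream multiscale fusion and detection.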

Author Keywords: Object detection; Multimodal fusion; Mamba model; Remote sensing

KeyWords Plus: NEURAL-NETWORKS

Addresses: [Liu, Chang; Dong, Yanni] Wuhan Univ, Sch Resource & Environm Sci, Wuhan 430079, Peoples R China.

[Ma, Xin] Wuhan Univ, State Key Lab Informat Engn Surveying Mapping & Re, Wuhan 430079, Peoples R China.

[Yang, Xiaochen] Univ Glasgow, Sch Math & Stat, Glasgow G12 8QQ, Scotland.

[Zhang, Yuxiang] China Univ Geosci, Sch Geophys & Geomat, Wuhan 430074, Peoples R China.

Corresponding Author Address: Dong, YN (corresponding author), Wuhan Univ, Sch Resource & Environm Sci, Wuhan 430079, Peoples R China.

E-mail Address: [email protected]

Impact Factor: 15.5