EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views

¹University of Science and Technology of China  ²Institute of Artificial Intelligence, Hefei Comprehensive National Science Center

EgoChoir takes egocentric frames and head motion captured by head-mounted devices, along with the 3D object, to capture 3D interaction regions, including human contact and object affordance. The human body motion is visualized only for intuitive observation of contact; it is not used as input by EgoChoir.

Abstract

Understanding egocentric human-object interaction (HOI) is a fundamental aspect of human-centric perception, facilitating applications like AR/VR and embodied AI. For egocentric HOI, in addition to perceiving semantics, e.g., "what" interaction is occurring, capturing "where" the interaction specifically manifests in 3D space is also crucial, as it links perception and operation. Existing methods primarily leverage observations of HOI to capture interaction regions from an exocentric view. However, incomplete observations of the interacting parties in the egocentric view introduce ambiguity between visual observations and interaction contents, impairing their efficacy. From the egocentric view, humans integrate the visual cortex, cerebellum, and brain to internalize their intentions and interaction concepts of objects, allowing them to pre-formulate interactions and act even when interaction regions are out of sight. In light of this, we propose harmonizing the visual appearance, head motion, and 3D object to excavate the object interaction concept and subject intention, jointly inferring 3D human contact and object affordance from egocentric videos. To achieve this, we present EgoChoir, which links object structures with the interaction contexts inherent in appearance and head motion to reveal object affordance, and further utilizes it to model human contact. In addition, a gradient modulation is employed to adopt appropriate clues for capturing interaction regions across various egocentric scenarios. Moreover, 3D contact and affordance are annotated for egocentric videos collected from Ego-Exo4D and GIMO to support the task. Extensive experiments on these datasets demonstrate the effectiveness and superiority of EgoChoir.
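For intuition only, the snippet below sketches tensor shapes for the inputs and outputs described above (egocentric frames, head motion, a 3D object point cloud; per-vertex contact and per-point affordance). The clip length, point count, and the 6-DoF head-motion format are illustrative assumptions, not the released interface.

    import torch

    T, V, N = 16, 6890, 2048                      # frames, SMPL vertices, object points (assumed sizes)
    frames      = torch.rand(1, T, 3, 224, 224)   # egocentric video clip
    head_motion = torch.rand(1, T, 6)             # per-frame head rotation + translation (assumed 6-DoF format)
    obj_points  = torch.rand(1, N, 3)             # sampled 3D object point cloud

    # A model following the description above maps these inputs to:
    contact    = torch.rand(1, V)                 # contact probability for each human body vertex
    affordance = torch.rand(1, N)                 # interaction (affordance) probability for each object point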



Egocentric body interactions

The human body motion is visualized only for intuitive observation of contact; it is not used as input by EgoChoir.


Egocentric hand interactions



Comparison with LEMON



Interaction with multiple objects

Please zoom in for better visualization, especially of the contact with hands.


Dynamic Affordance

As the interaction content varies, the object affordance changes, e.g., the grasp and cut affordances of the knife.

As the interaction content varies, the object affordance changes, e.g., the grasp and pour affordances of the kettle.

Method Overview

EgoChoir first employs modality-wise encoders to extract features, where the motion encoder is pre-trained by minimizing the distance between visual disparity and motion disparity. It then takes these features to excavate the object interaction concept and subject intention, modeling affordance and contact through parallel cross-attention with gradient modulation.
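As a rough illustration of the parallel cross-attention with gradient modulation described above, the PyTorch sketch below keeps the forward pass unchanged and only rescales each branch's gradient during backpropagation. Module names, feature dimensions, and the concrete modulation weights are assumptions made for readability; this is not the exact EgoChoir implementation.

    import torch
    import torch.nn as nn

    class GradScale(torch.autograd.Function):
        """Identity in the forward pass; scales the gradient in the backward pass."""
        @staticmethod
        def forward(ctx, x, scale):
            ctx.scale = scale
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return grad_out * ctx.scale, None

    class ParallelInteractionHeads(nn.Module):
        """Two parallel cross-attention branches (hypothetical layout): object tokens
        attend to the interaction context from appearance and head motion to predict
        affordance, and body tokens attend to that context plus the affordance
        features to predict contact."""
        def __init__(self, dim=256, heads=4):
            super().__init__()
            self.afford_attn  = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.contact_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.afford_head  = nn.Linear(dim, 1)    # per-point affordance probability
            self.contact_head = nn.Linear(dim, 1)    # per-vertex contact probability

        def forward(self, obj_tokens, ctx_tokens, body_tokens, w_afford=1.0, w_contact=1.0):
            # Gradient modulation: rescale each branch's gradient so the clue that
            # matters for the current scenario is emphasized during training.
            ctx_a = GradScale.apply(ctx_tokens, w_afford)
            ctx_c = GradScale.apply(ctx_tokens, w_contact)

            afford_feat, _  = self.afford_attn(obj_tokens, ctx_a, ctx_a)      # object <- context
            contact_ctx     = torch.cat([ctx_c, afford_feat], dim=1)          # context + affordance
            contact_feat, _ = self.contact_attn(body_tokens, contact_ctx, contact_ctx)

            affordance = self.afford_head(afford_feat).squeeze(-1).sigmoid()
            contact    = self.contact_head(contact_feat).squeeze(-1).sigmoid()
            return contact, affordance

Here w_afford and w_contact are placeholder scalars; how the actual modulation weights are derived across egocentric scenarios is detailed in the paper.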

Data Annotation

Annotation of 3D human contact and object affordance. (a) Contact annotation for data in Ego-Exo4D. (b) Contact annotation for the GIMO dataset, including calculation and manual refinement. (c) 3D object affordance annotation, where the red region denotes higher interaction probability and the blue region indicates the adjacent propagable region.
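For the calculation step in (b), a common way to obtain initial contact labels is to threshold the distance between posed body vertices and the object or scene surface, and then refine the result manually. The sketch below illustrates this idea; the 2 cm threshold and function name are assumptions, not the authors' annotation tool.

    import numpy as np

    def proximity_contact(body_verts: np.ndarray, obj_points: np.ndarray, thresh: float = 0.02):
        """body_verts: (V, 3) posed human mesh vertices; obj_points: (N, 3) object points.
        Returns a boolean (V,) mask marking vertices within `thresh` meters of the object."""
        # Pairwise distances from every body vertex to every object point: (V, N).
        dists = np.linalg.norm(body_verts[:, None, :] - obj_points[None, :, :], axis=-1)
        return dists.min(axis=1) < thresh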

BibTeX


            @article{yang2024egochoir,
              title={EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views},
              author={Yang, Yuhang and Zhai, Wei and Wang, Chengfeng and Yu, Chengjun and Cao, Yang and Zha, Zheng-Jun},
              journal={arXiv preprint arXiv:2405.13659},
              year={2024}
            }