Weichen Zhang, Dong Xu, Jing Zhang, Wanli Ouyang
The paper introduces Progressive Modality Cooperation (PMC), a framework for multi-modality domain adaptation that effectively transfers knowledge across domains by leveraging multiple modalities, even when some are missing in the target domain.
This research presents Progressive Modality Cooperation (PMC), an approach that helps computers learn from data that comes in multiple forms, such as images and depth information, and transfer that learning to new but related data sets. The technique is particularly useful when some types of data are missing in the new data set, because it can generate the missing modalities from the data that is available. This is especially helpful in tasks like recognizing objects in images or videos, where having more types of data can improve accuracy. The method has been evaluated on several image and video data sets, showing promising improvements on recognition tasks.
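To make the missing-modality idea concrete, here is a minimal illustrative sketch, not the paper's actual PMC architecture: on the source domain, where both modalities are observed, we fit a map from RGB features to depth features (a least-squares linear map stands in for a learned generation network), then apply it on the target domain, where only RGB is available, to hallucinate the missing depth features. All dimensions and data here are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: paired RGB and depth features (both modalities observed).
n_src, d_rgb, d_depth = 200, 16, 8
W_true = rng.normal(size=(d_rgb, d_depth))      # hidden cross-modal relation (synthetic)
rgb_src = rng.normal(size=(n_src, d_rgb))
depth_src = rgb_src @ W_true + 0.01 * rng.normal(size=(n_src, d_depth))

# Learn an RGB -> depth "hallucination" map on the source domain.
# Least squares plays the role of the generation network in this toy setting.
W_hat, *_ = np.linalg.lstsq(rgb_src, depth_src, rcond=None)

# Target domain: only RGB is observed; generate the missing depth features.
rgb_tgt = rng.normal(size=(50, d_rgb))
depth_tgt_generated = rgb_tgt @ W_hat

print(depth_tgt_generated.shape)  # (50, 8)
```

The generated depth features can then be fed, alongside the real RGB features, into a multi-modality recognition model on the target domain.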