TokenHSI

Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization

Liang Pan1,2,    Zeshi Yang3,    Zhiyang Dou2,    Wenjia Wang2,    Buzhen Huang4,
Bo Dai2,5,    Taku Komura2,    Jingbo Wang1,†
1Shanghai AI Laboratory,    2The University of Hong Kong,    3Independent Researcher,
4Southeast University,    5Feeling AI

(†: corresponding author)

CVPR 2025
🏆️ Oral Presentation (Top 3.3%)

Introducing TokenHSI, a unified model that enables physics-based characters to perform diverse human-scene interaction tasks.



Abstract

Synthesizing diverse and physically plausible Human-Scene Interactions (HSI) is pivotal for both computer animation and embodied AI. Despite encouraging progress, current methods mainly focus on developing separate controllers, each specialized for a specific interaction task. This significantly hinders the ability to tackle a wide variety of challenging HSI tasks that require the integration of multiple skills, e.g., sitting down while carrying an object. To address this issue, we present TokenHSI, a single, unified transformer-based policy capable of multi-skill unification and flexible adaptation. The key insight is to model the humanoid proprioception as a separate, shared token and combine it with distinct task tokens via a masking mechanism. Such a unified policy enables effective knowledge sharing across skills, thereby facilitating multi-task training. Moreover, our policy architecture supports variable-length inputs, enabling flexible adaptation of learned skills to new scenarios. By training additional task tokenizers, we can not only modify the geometries of interaction targets but also coordinate multiple skills to address complex tasks. Experiments demonstrate that our approach significantly improves versatility, adaptability, and extensibility across various HSI tasks.
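The token-and-mask idea from the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: the function and variable names are made up, and a masked mean pool stands in for the actual transformer, since the point here is only how one shared proprioception token plus per-task tokens and a boolean mask yield a variable-length input in which inactive tasks cannot influence the output.

```python
def build_token_sequence(proprio, task_tokens, active):
    """Assemble the policy input: one shared proprioception token plus one
    token per task, with a boolean mask marking which tasks are active.
    (Hypothetical sketch; names and shapes are not from the paper.)"""
    tokens = [proprio] + task_tokens
    mask = [True] + list(active)  # the proprioception token is always kept
    return tokens, mask

def masked_pool(tokens, mask):
    """Stand-in for the transformer: mean over unmasked tokens only.
    The real policy uses attention, but the masking effect is analogous:
    tokens of inactive tasks are excluded from the computation."""
    kept = [t for t, m in zip(tokens, mask) if m]
    dim = len(kept[0])
    return [sum(t[i] for t in kept) / len(kept) for i in range(dim)]

# Usage: carrying is active, sitting is masked out.
proprio = [1.0] * 4
carry_token = [2.0] * 4
sit_token = [100.0] * 4  # inactive; must not affect the output
tokens, mask = build_token_sequence(proprio, [carry_token, sit_token],
                                    active=[True, False])
out = masked_pool(tokens, mask)  # → [1.5, 1.5, 1.5, 1.5]
```

Because the sequence length is just `1 + number_of_tasks`, adding a new task tokenizer simply appends a token, which is what makes the adaptation stage possible without retraining the shared backbone.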



Pipeline


TokenHSI consists of two stages:
(left) foundational skill learning and (right) policy adaptation.



Foundational Skill Learning


TokenHSI excels at seamlessly unifying multiple foundational HSI skills within a single transformer.


Path-following


Sitting

Climbing

Carrying



Policy Adaptation


The learned skills can be flexibly and efficiently adapted to challenging new tasks through our transformer-based policy adaptation.

(1) Skill Composition


We train a new task tokenizer that combines each of the path-following, sitting, and climbing skills with carrying, creating new composite skills.

(2) Object Shape Variation


We fine-tune the task tokenizer (previously trained for box-carrying) to generalize it to more objects, such as chairs and tables.

(3) Terrain Shape Variation


We introduce a new height map tokenizer to enable the humanoid to perform path-following and carrying tasks on uneven terrain.

(4) Long-horizon Task Completion in a Complex Dynamic Environment


We jointly fine-tune multiple task tokenizers to tackle challenges in long-horizon tasks, such as skill transition and collision avoidance.
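The adaptation recipe described in (1)-(4) follows a common pattern: keep the shared transformer and previously learned tokenizers frozen, and train only the newly added (or newly fine-tuned) tokenizers. A minimal sketch of that parameter-selection step, with entirely hypothetical module names (the real implementation lives in a deep-learning framework):

```python
# Hypothetical module registry for the adaptation stage. Only modules
# marked trainable would receive gradients; everything shared from the
# foundational-skill stage stays frozen.
policy_modules = {
    "transformer_backbone": {"trainable": False},  # shared, frozen
    "proprio_tokenizer":    {"trainable": False},  # shared, frozen
    "carry_tokenizer":      {"trainable": False},  # stage-one skill tokenizer
    "sit_carry_tokenizer":  {"trainable": True},   # new composite-task tokenizer
    "height_map_tokenizer": {"trainable": True},   # new terrain tokenizer
}

def trainable_modules(modules):
    """Return the names of modules whose parameters are optimized."""
    return sorted(name for name, cfg in modules.items() if cfg["trainable"])

print(trainable_modules(policy_modules))
# → ['height_map_tokenizer', 'sit_carry_tokenizer']
```

Freezing the backbone is what lets the learned skills transfer cheaply: each new scenario only costs a small tokenizer's worth of parameters, and, as in (4), several tokenizers can later be fine-tuned jointly for long-horizon tasks.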

BibTeX

@inproceedings{pan2025tokenhsi,
      title={TokenHSI: Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization},
      author={Pan, Liang and Yang, Zeshi and Dou, Zhiyang and Wang, Wenjia and Huang, Buzhen and Dai, Bo and Komura, Taku and Wang, Jingbo},
      booktitle={CVPR},
      year={2025},
}
Please also consider citing the following papers that inspired TokenHSI.
@article{tessler2024maskedmimic,
      title={MaskedMimic: Unified physics-based character control through masked motion inpainting},
      author={Tessler, Chen and Guo, Yunrong and Nabati, Ofir and Chechik, Gal and Peng, Xue Bin},
      journal={ACM Transactions on Graphics (TOG)},
      volume={43},
      number={6},
      pages={1--21},
      year={2024},
      publisher={ACM New York, NY, USA}
}
@article{he2024hover,
      title={HOVER: Versatile neural whole-body controller for humanoid robots},
      author={He, Tairan and Xiao, Wenli and Lin, Toru and Luo, Zhengyi and Xu, Zhenjia and Jiang, Zhenyu and Kautz, Jan and Liu, Changliu and Shi, Guanya and Wang, Xiaolong and others},
      journal={arXiv preprint arXiv:2410.21229},
      year={2024}
}