A Human Intention and Motion Prediction Framework for Applications in Human-Centric Digital Twins

Usman Asad, Azfar Khalid, Waqas Akbar Lughmani, Shummaila Rasheed, Muhammad Mahabat Khan

    Research output: Contribution to journal › Article › peer-review

    Abstract

    In manufacturing settings where humans and machines collaborate, understanding and predicting human intention is crucial for enabling the seamless execution of tasks. This knowledge is the basis for creating an intelligent, symbiotic, and collaborative environment. However, current foundation models often fall short in directly anticipating complex tasks and producing contextually appropriate motion. This paper proposes a modular framework that investigates strategies for structuring task knowledge and engineering context-rich prompts to guide Vision–Language Models in understanding and predicting human intention in semi-structured environments. Our evaluation, conducted across three use cases of varying complexity, reveals a critical tradeoff between prediction accuracy and latency. We demonstrate that a Rolling Context Window strategy, which uses a history of frames and the previously predicted state, achieves a strong balance of performance and efficiency. This approach significantly outperforms single-image inputs and computationally expensive in-context learning methods. Furthermore, incorporating egocentric video views yields a substantial 10.7% performance increase in complex tasks. For short-term motion forecasting, we show that the accuracy of joint position estimates is enhanced by using historical pose, gaze data, and in-context examples.
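    The Rolling Context Window strategy described above can be pictured as a buffer of recent frames combined with the last predicted state, all folded into the prompt for the next query. The following is a minimal illustrative sketch, not the authors' implementation: the class name, the `query_vlm` placeholder, and the prompt wording are assumptions introduced here for illustration only.

    ```python
    from collections import deque

    # Hypothetical placeholder for a Vision-Language Model call; any
    # multimodal chat client could be substituted here.
    def query_vlm(prompt: str, images: list) -> str:
        raise NotImplementedError("Wire up a VLM client here.")

    class RollingContextWindow:
        """Keeps the last N frames plus the previously predicted state,
        and builds a context-rich prompt for the next intention query."""

        def __init__(self, window_size: int = 4, task_description: str = ""):
            self.frames = deque(maxlen=window_size)    # most recent frames
            self.last_prediction = "unknown"           # previously predicted state
            self.task_description = task_description  # structured task knowledge

        def update(self, frame) -> str:
            self.frames.append(frame)
            prompt = (
                f"Task context: {self.task_description}\n"
                f"Previous predicted state: {self.last_prediction}\n"
                f"Given the last {len(self.frames)} frames, predict the "
                "operator's current intention and their next action."
            )
            self.last_prediction = query_vlm(prompt, list(self.frames))
            return self.last_prediction
    ```

    Feeding back the previous prediction keeps the prompt short relative to full in-context learning while still carrying task history, which is the balance of accuracy and latency the abstract reports.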
    Original language: English
    Journal: Biomimetics
    Volume: 10
    Issue number: 10
    DOIs
    Publication status: Published (VoR) - 1 Oct 2025
