Empathic Prompting: Non-Verbal Context Integration for Multimodal LLM Conversations


📝 Original Info

  • Title: Empathic Prompting: Non-Verbal Context Integration for Multimodal LLM Conversations
  • ArXiv ID: 2510.20743
  • Date: 2025-10-23
  • Authors: Not specified in the provided source (to be verified against the original paper)

📝 Abstract

We present Empathic Prompting, a novel framework for multimodal human-AI interaction that enriches Large Language Model (LLM) conversations with implicit non-verbal context. The system integrates a commercial facial expression recognition service to capture users' emotional cues and embeds them as contextual signals during prompting. Unlike traditional multimodal interfaces, empathic prompting requires no explicit user control; instead, it unobtrusively augments textual input with affective information for conversational alignment and smoothness. The architecture is modular and scalable, allowing integration of additional non-verbal modules. We describe the system design, implemented through a locally deployed DeepSeek instance, and report a preliminary service and usability evaluation (N=5). Results show consistent integration of non-verbal input into coherent LLM outputs, with participants highlighting conversational fluidity. Beyond this proof of concept, empathic prompting points to applications in chatbot-mediated communication, particularly in domains like healthcare or education, where users' emotional signals are critical yet often opaque in verbal exchanges.
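The core mechanism the abstract describes, detecting an emotional cue from the user's face and silently embedding it as context in the prompt, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `detect_affect` is a hypothetical stand-in for the commercial facial expression recognition service, and the DeepSeek call is omitted; only the prompt-augmentation step is shown.

```python
def detect_affect(frame: bytes) -> dict:
    """Hypothetical stand-in for a facial expression recognition (FER) call.
    A real system would send the webcam frame to an FER service and receive
    emotion scores; here we return a fixed example for illustration."""
    return {"emotion": "frustrated", "confidence": 0.82}


def build_empathic_prompt(user_text: str, affect: dict,
                          threshold: float = 0.5) -> str:
    """Embed the affective cue as implicit context for the LLM.
    Below the confidence threshold the text passes through unchanged,
    keeping the augmentation unobtrusive, as the paper emphasizes that
    the user never controls this step explicitly."""
    if affect.get("confidence", 0.0) < threshold:
        return user_text
    context = (f"[Non-verbal context: the user appears "
               f"{affect['emotion']} (confidence {affect['confidence']:.2f}). "
               f"Adapt your tone accordingly; do not mention this context.]")
    return f"{context}\n{user_text}"


if __name__ == "__main__":
    affect = detect_affect(b"")  # stand-in for a captured webcam frame
    prompt = build_empathic_prompt("Why does my code keep failing?", affect)
    print(prompt)
```

The resulting augmented prompt would then be sent to the locally deployed LLM in place of the raw user text; because each non-verbal signal is folded in as a separate context line, additional modules (e.g., voice prosody) could append their own cues in the same way, matching the modular design the abstract claims.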


Reference

This content is AI-processed based on open access ArXiv data.
