The 1st CogMAEC Workshop

Cognition-oriented Multimodal Affective and Empathetic Computing

27-31 October 2025 | Dublin, Ireland | ACM Multimedia 2025

While multimodal systems excel at basic emotion recognition, they struggle to understand why we feel the way we do and how emotions evolve. This workshop pioneers cognitive AI that interprets human affect through multimodal context and causal reasoning. Join us in redefining emotional intelligence for healthcare robots, empathetic chatbots, and beyond.

Stay updated: https://CogMAEC.github.io/MM2025

About the CogMAEC Workshop

Welcome to the 1st CogMAEC Workshop, proudly co-located with ACM Multimedia 2025!

As human-computer interaction evolves, emotional intelligence and empathy are becoming essential capabilities of intelligent systems. The CogMAEC Workshop (Cognition-oriented Multimodal Affective and Empathetic Computing) aims to push the boundaries of traditional affective computing by exploring the next frontier: cognitive emotional understanding.

While previous work in multimodal affective computing has focused on recognizing basic emotions from facial expressions, speech, and text, this workshop sets its sights on deeper challenges — understanding the "why" behind emotions, reasoning over context, and simulating human-like empathetic responses. With the recent advances in Multimodal Large Language Models (MLLMs), the time is ripe to rethink how machines perceive, reason, and respond to human emotions.

CogMAEC'25 brings together researchers and practitioners working on:

  • Traditional Multimodal Affective Computing
  • MLLM-based Multimodal Affective Computing
  • Cognition-oriented Multimodal Affective Computing

The workshop will cover both traditional multimodal emotion recognition techniques and cutting-edge cognition-driven methodologies. We aim to foster meaningful discussion and collaboration at the intersection of affective computing, cognitive modeling, and multimodal AI.

Join us as we collectively reimagine what emotional AI can become — not just smarter, but more human.

All workshop details, schedules, and updates can be found on our website.

Call for Papers

We invite submissions to the workshop in the following three categories:

1. Position or Perspective Papers (4 to 8 pages, excluding references): We encourage forward-looking contributions that present novel ideas, conceptual frameworks, research outlooks, or identify key challenges aligned with the workshop themes.

2. Featured Papers (submission of title, abstract, and the original paper): We welcome impactful papers previously published at top-tier conferences or journals, or well-curated summaries that consolidate significant work relevant to the workshop's focus areas.

3. Demonstration Papers (up to 2 pages, excluding references): We seek short papers introducing prototypes, tools, or systems—whether new or previously published—that showcase practical implementations or evaluation methodologies related to the workshop topics.

All accepted submissions will be included in the ACM MM proceedings. Authors of selected papers will be invited to present their work at the workshop.

A Best Paper Award will be presented to an outstanding submission, with the winner announced during the event.

The workshop welcomes submissions on the following topics (including but not limited to):

1) Traditional Multimodal Affective Computing

  • Facial Expression Recognition
  • Speech Emotion Recognition
  • Audio-visual Emotion Recognition
  • Body Gesture Emotion Detection
  • Micro-expression Recognition
  • Multimodal Sentiment Analysis
  • Multimodal Emotion Recognition in Conversation
  • Multimodal Stance Detection
  • Multimodal Emotion Analysis in Memes
  • Multimodal Sarcasm and Irony Detection
  • Cross-cultural Emotion Recognition
  • Physiological Signal-based Emotion Recognition
  • Emotion-aware Dialogue Generation
  • Emotional Speech Synthesis
  • Multimodal Affective Storytelling
  • Affective Music Generation
  • Affective Facial Animation
  • Emotion-controlled Avatar Generation

2) MLLM-based Multimodal Affective Computing

  • Few-shot Emotion Recognition
  • Multimodal Emotion Reasoning
  • Multimodal Affective Hallucination Mitigation
  • Emotion-aware Self-supervised Representation Learning
  • Multimodal Affective In-context Learning
  • Affective Instruction Tuning for MLLMs
  • Multimodal Feature Extraction and Fusion
  • Cross-modal Affective Alignment
  • Cross-domain Affective Transfer Learning
  • Emotion-aware Visual Question Answering
  • Emotion-guided Text-to-Image/Video Generation
  • Multimodal Empathetic Dialogue Systems
  • Persona-driven Emotion-aware Conversational AI

3) Cognition-oriented Multimodal Affective Computing

  • Multimodal Implicit Sentiment Analysis
  • Multimodal Emotion Cause Analysis in Conversations
  • Multimodal Aspect-based Sentiment Analysis
  • Neuro-symbolic Reasoning for Emotion Understanding
  • Theory of Mind-based Empathy Modeling
  • Cognitive Load and Affect Interaction Modeling
  • Cross-modal Cognitive Bias Detection

Important Dates

  • Website Preparation: March 30, 2025 (AoE)
  • Paper Submission Start: April 15, 2025 (AoE)
  • Paper Submission Deadline: June 30, 2025 (AoE)
  • Paper Notification: July 20, 2025 (AoE)
  • Camera Ready: August 23, 2025 (AoE)
  • Workshop Date: October 27-28, 2025 (AoE)

Submission Guidelines

All submissions must be written in English and follow the current ACM two-column conference format. Page limits are inclusive of all content, including figures and appendices. Submissions must be anonymized for review.

Authors should use the appropriate ACM templates: the "sigconf" LaTeX template or the Interim Word Template, both available on the ACM Proceedings Template page. Alternatively, authors can prepare their submissions using Overleaf's official ACM templates.

Please use \documentclass[sigconf, screen, review, anonymous]{acmart} when preparing your LaTeX manuscript for submission and review.
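For reference, a minimal manuscript skeleton with the required class options might look like the sketch below. The title, author, and bibliography file names are placeholders; note that the "anonymous" option replaces the author block with "Anonymous Author(s)" in the compiled PDF, so the block itself must still be present.

```latex
\documentclass[sigconf, screen, review, anonymous]{acmart}

\begin{document}

\title{Your Paper Title}

% Required even under anonymous review; acmart anonymizes it.
\author{Anonymous}
\affiliation{
  \institution{Anonymous Institution}
  \country{}
}

\begin{abstract}
Abstract text here.
\end{abstract}

\maketitle

\section{Introduction}
Body text here.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % placeholder .bib file

\end{document}
```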

Invited Speakers

TBD
Organizers

Hao Fei

National University of Singapore

Bobo Li

National University of Singapore

Meng Luo

National University of Singapore

Qian Liu

University of Auckland

Lizi Liao

Singapore Management University

Fei Li

Wuhan University

Min Zhang

Harbin Institute of Technology (Shenzhen)

Björn W. Schuller

Imperial College London

Mong-Li Lee

National University of Singapore

Erik Cambria

Nanyang Technological University

Contact

For any questions about the workshop, please contact us through:

Email:

Google Group: https://groups.google.com/g/cogmaec