Author(s) :
Ramya. G
Conference Name :
International Conference on Modern Trends in Engineering and Management (ICMTEM-25)
Abstract :
Emotion recognition is a crucial aspect of human-computer interaction, enabling machines to understand and respond to human emotions more effectively. In this paper, we explore state-of-the-art models for multimodal emotion recognition that leverage textual, auditory, and visual inputs. By integrating these diverse data sources, we develop an ensemble model designed to enhance the accuracy and robustness of emotion detection. Our approach aims to capture intricate emotional cues from speech patterns, facial expressions, and textual content, ensuring a more comprehensive understanding of human emotions. The proposed system aggregates insights from each modality and presents the results in a clear and interpretable manner, facilitating better decision-making in real-world applications. This research contributes to advancing emotion recognition technologies, with potential applications in healthcare, customer service, and human-computer interaction.
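The abstract describes an ensemble that aggregates insights from textual, auditory, and visual modalities into a single emotion estimate. A minimal late-fusion sketch of that idea is shown below; it is illustrative only and not the authors' implementation. The emotion label set, the fusion weights, and the fuse_predictions helper are hypothetical assumptions introduced purely to show how per-modality class probabilities could be combined by weighted averaging.

    # Minimal late-fusion sketch (illustrative only): the label set, weights,
    # and helper name below are hypothetical, not taken from the paper.
    import numpy as np

    EMOTIONS = ["anger", "happiness", "sadness", "neutral"]  # assumed label set

    def fuse_predictions(text_probs, audio_probs, visual_probs,
                         weights=(0.4, 0.3, 0.3)):
        """Combine per-modality class probabilities by weighted averaging."""
        stacked = np.stack([text_probs, audio_probs, visual_probs])  # (3, n_classes)
        fused = np.average(stacked, axis=0, weights=weights)
        return fused / fused.sum()  # renormalise to a probability distribution

    if __name__ == "__main__":
        # Placeholder outputs standing in for the text, speech, and facial models.
        text_probs = np.array([0.10, 0.70, 0.10, 0.10])
        audio_probs = np.array([0.05, 0.60, 0.20, 0.15])
        visual_probs = np.array([0.15, 0.55, 0.20, 0.10])

        fused = fuse_predictions(text_probs, audio_probs, visual_probs)
        print(dict(zip(EMOTIONS, fused.round(3))))
        print("Predicted emotion:", EMOTIONS[int(np.argmax(fused))])

Weighted averaging of per-modality probabilities is only one possible fusion strategy; the paper's ensemble may instead use learned fusion or decision-level voting.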