Abstract:
The identification of emotions has become a significant research area owing to its numerous applications, and facial expression recognition is one of the most important means of conveying emotional information about people. In recent years, the popularity of facial emotion recognition (FER) has grown, driven by advances in artificial intelligence. Face pre-processing steps such as facial detection and alignment usually add complexity to the FER classification pipeline. In this paper, we simplify the process by addressing identification, recognition, and classification concurrently. The CK Plus dataset was first manually annotated, and facial expressions were then analyzed with an end-to-end FER network, FER using YOLOv8, whose architecture is derived from YOLOv8. The proposed model exhibits high accuracy on the CK Plus and HFER datasets, with experimental results showing detection performance (mAP50) ranging from 0.974 to 0.989 on the test sets. Our method combines high accuracy with fast inference, as demonstrated through testing on real-time images. Additionally, the model is validated for real-time FER on spontaneous images captured from a camera, showcasing its robust performance in dynamic scenarios.
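To give a concrete picture of how such a single-stage FER pipeline can be set up, the sketch below fine-tunes a YOLOv8 detection model on an emotion dataset exported in YOLO format, so that each detected face box directly carries an emotion class. This is a minimal illustration only, not the authors' exact configuration: the use of the ultralytics package, the dataset file `ckplus_fer.yaml`, and the hyperparameters shown are all assumptions.

```python
# Minimal sketch (assumed setup, not the authors' exact pipeline): fine-tune a
# YOLOv8 detector so each detected face box is classified into an emotion class.
# Assumes the ultralytics package and a dataset config "ckplus_fer.yaml" listing
# train/val image folders and emotion class names (e.g., anger, happy, surprise).
from ultralytics import YOLO

# Start from COCO-pretrained YOLOv8 weights and adapt them to emotion classes.
model = YOLO("yolov8n.pt")

# Train: bounding boxes localize the face while class labels carry the emotion,
# so detection, recognition, and classification happen in a single network.
model.train(data="ckplus_fer.yaml", epochs=100, imgsz=640)

# Evaluate on the validation split; the metrics object includes mAP50.
metrics = model.val()
print(metrics.box.map50)

# Run inference on a new image; each result holds boxes with emotion class ids.
results = model.predict("test_face.jpg", conf=0.25)
for r in results:
    for box in r.boxes:
        print(int(box.cls), float(box.conf))
```

In this kind of setup, real-time operation follows from the single forward pass per frame; the same `predict` call can be applied to frames read from a camera stream.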