
Browsing by Author "Abril Cabrera, Juliana Nicole"

Reconocimiento de emociones utilizando señales cerebrales recogidas a través de Interfaces cerebro-computador para la computación afectiva mediante aprendizaje automático [Emotion recognition using brain signals collected through brain-computer interfaces for affective computing via machine learning]
    (Universidad de Cuenca, 2025-09-23) Abril Cabrera, Juliana Nicole; Granda Salamea, Camila Verónica; Cedillo Orellana, Irene Priscila; Auquilla Sangolquí, Andrés Vinicio
    Affective computing aims to develop systems capable of recognizing and responding empathetically to human emotions, with the goal of enhancing human-computer interaction. In this context, brain signals such as EEG stand out as a more precise and objective means of identifying emotional states, as they are less susceptible to conscious manipulation or subjective bias. However, the practical implementation of emotion recognition models faces significant challenges, including reliance on expensive proprietary software that is difficult to replicate and performs inconsistently across different contexts or devices. Additionally, available public databases vary in terms of channel count, sampling frequency, and acquisition protocols, which hinders model generalization. To address these limitations, this study proposes a Machine Learning model for emotion recognition using EEG signals, based on the open-source OpenBCI Cyton + Daisy device. An experimental protocol was designed involving 37 participants exposed to audiovisual stimuli, with emotional responses labeled using the SAM questionnaire. The EEG signals underwent rigorous preprocessing, followed by training and evaluation of classification models such as SVM, RF, MLP, and Transformers in both subject-dependent and subject-independent scenarios. The results demonstrate competitive and consistent performance, validating the feasibility of this approach as an accessible, reproducible, and low-cost alternative in the field of emotion recognition.
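The subject-independent evaluation described in the abstract requires that no participant's trials appear in both the training and test sets. A minimal sketch of that split with one of the cited classifiers (SVM) is shown below; it uses synthetic stand-in features, hypothetical sizes, and scikit-learn's `GroupKFold`, none of which are taken from the thesis itself.

```python
# Illustrative sketch only: the thesis's actual pipeline (OpenBCI Cyton + Daisy
# recordings, SAM-labeled responses, its preprocessing steps) is not reproduced.
# Synthetic features stand in for EEG-derived features, and GroupKFold ensures a
# subject-independent split: every participant's trials fall entirely in either
# the training fold or the test fold, never both.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 10, 20, 32  # hypothetical sizes
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 2, size=X.shape[0])                  # e.g. low/high valence
groups = np.repeat(np.arange(n_subjects), trials_per_subject)  # subject IDs

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean leave-subjects-out accuracy: {np.mean(scores):.2f}")
```

With real EEG features the same grouping scheme makes the reported accuracy reflect generalization to unseen participants, which is the harder scenario the abstract contrasts with subject-dependent evaluation.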
