MATEC Web of Conferences, Volume 61 (2016)
The International Seminar on Applied Physics, Optoelectronics and Photonics (APOP 2016)
Open Access
Article number: 03012
Number of pages: 4
Section: Chapter 3 Information Security and Computer Science
DOI: https://doi.org/10.1051/matecconf/20166103012
Published online: 28 June 2016
  1. L.L. Yu, Z.X. Cai, M.Y. Chen, Study on emotion feature analysis and recognition in speech signal: an overview, J. Circuits and Systems, 12, 4 (2007): 76–84.
  2. Y.H. Yan, Y. Zhou, Y.Q. Sun, Feature Extraction Method for Speech Emotion Recognition. China: 2010102729713 (2010).
  3. X. Mao, L.J. Chen, Speech emotion recognition based on parametric filter and fractal dimension, IEICE T INF SYST, 93, 8 (2010): 2324–2326.
  4. C.R. Zou, L. Zhao, Speech Emotion Recognition Method Based on Improved Fuzzy Vector Quantization. China: 2008101228062 (2008).
  5. Y. Attabi, P. Dumouchel, Anchor models for emotion recognition from speech, TAC, 4, 3 (2013): 280–290.
  6. W.M. Zheng, M.H. Xin, X.L. Wang, A novel speech emotion recognition method via incomplete sparse least square regression, IEEE SIGNAL PROC LET, 21, 5 (2014): 569–572.
  7. Q.R. Mao, M. Dong, Z.W. Huang, Learning salient features for speech emotion recognition using convolutional neural networks, IEEE MULTIMEDIA, 16, 8 (2014): 2203–2213.
  8. P. Ekman, W. Friesen, Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto: Consulting Psychologists Press (1978).
  9. L.H. Liang, H.Z. Ai, G.Y. Xu, A survey of human face detection, Chinese J. Computers, 25, 5 (2002): 449–458.
  10. Y. Rahulamathavan, R.C.-W. Phan, J.A. Chambers, Facial expression recognition in the encrypted domain based on local Fisher discriminant analysis, TAC, 4, 1 (2013): 83–92.
  11. W.M. Zheng, Multi-view facial expression recognition based on group sparse reduced-rank regression, TAC, 5, 1 (2014): 71–85.
  12. P.C. Petrantonakis, L.J. Hadjileontiadis, Emotion recognition from EEG using higher order crossings, IEEE T INF TECHNOL B, 14, 2 (2010): 186–197.
  13. S.L. Lin, G.Y. Liu, H.L. Zhang, Application of ACO algorithm to emotion recognition research based on RSP signal, IJCEA, 47, 2 (2011): 169–172.
  14. H. Zacharatos, H. Gatzoulis, Y.L. Chrysanthou, Automatic emotion recognition based on body movement analysis: a survey, IEEE COMPUT GRAPH, 34, 6 (2014): 35–45.
  15. Z. Zeng, M. Pantic, G.I. Roisman, A survey of affect recognition methods: audio, visual, and spontaneous expressions, IEEE T PATTERN ANAL, 31, 1 (2009): 39–58.
  16. J. Kim, E. Andre, Emotion recognition based on physiological changes in music listening, IEEE T PATTERN ANAL, 30, 12 (2008): 2067–2083.
  17. C.W. Huang, Y. Jin, Q.Y. Wang, Multimodal emotion recognition based on speech and ECG signals, JSEU (NSE), 40, 5 (2010): 895–900.
  18. C. Busso, Z. Deng, S. Yildirim, Analysis of emotion recognition using facial expressions, speech and multimodal information, ICMI 2004 (2004): 205–211.
  19. S. Hoch, F. Althoff, G. McGlaun, Bimodal fusion of emotional data in an automotive environment, ICASSP 2005 (2005): 1085–1088.
  20. A. Sayedelahl, R. Araujo, M.S. Kamel, Audio-visual feature-decision level fusion for spontaneous emotion estimation in speech conversations, ICMEW 2013 (2013): 1–6.
  21. R. Tato, R. Santos, R. Kompe, Emotion space improves emotion recognition, ICSLP 2002 (2002): 2029–2032.
