Zoology and Animal Physiology, 2022, 3(2); doi: 10.38007/ZAP.2022.030204.

Animal Application in Oral English Recognition System

Author(s)

Yunfeng Qiu, Dengfeng Yao and Xinchen Kang

Corresponding Author:
Dengfeng Yao
Affiliation(s)

Beijing Union University, Beijing, China

Abstract

Amid the surge of artificial intelligence, the development of speech recognition systems has become inevitable, and speech recognition is now applied in many fields. However, the recognition of spoken language still faces many difficulties. The purpose of this paper is to draw on animals to find ways to solve the problems of spoken English recognition systems. Through literature research and investigation, this paper briefly introduces animal language and spoken language recognition systems. A comparative experiment was conducted to examine the influence of animal emotion analysis methods on the accuracy of spoken English recognition, and a questionnaire survey was used to analyze support for the application scenarios of this technology. The results show that the animal emotion analysis method increased recognition accuracy for male spoken English in a state of fear by 27% and for female spoken English in a state of anger by 29%. At present, the technology's corpus covers only a single language and the extracted emotional features are quite limited, so there is still much room for improvement. In the survey, 37% of respondents hoped that a spoken English recognition system based on animal emotion analysis could be applied to psychological monitoring; the high-pressure, fast-paced life of modern society leaves many people with psychological problems, and their psychological needs will continue to grow.
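
The abstract does not give implementation details for the comparative experiment, but the comparison it describes (recognition accuracy with and without emotion-derived features) can be sketched roughly as feature fusion followed by a held-out accuracy check. The sketch below is only an illustration of that idea, not the authors' pipeline: the synthetic data, the scikit-learn classifier, and the assumed 13 acoustic plus 4 emotion features are all hypothetical choices for demonstration.

# Illustrative sketch only (not the paper's implementation): compare a
# classifier trained on acoustic features alone against one trained on
# acoustic + emotion features, using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_samples, n_acoustic, n_emotion = 600, 13, 4   # hypothetical: 13 MFCC-like + 4 emotion scores
acoustic = rng.normal(size=(n_samples, n_acoustic))
emotion = rng.normal(size=(n_samples, n_emotion))
# Synthetic labels that depend partly on the emotion features, so the
# augmented model has extra signal to exploit.
labels = ((acoustic[:, 0] + emotion[:, 0] + 0.5 * emotion[:, 1]) > 0).astype(int)

def evaluate(features):
    # Train/test split and held-out accuracy for one feature set.
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

baseline = evaluate(acoustic)                         # acoustic features only
augmented = evaluate(np.hstack([acoustic, emotion]))  # acoustic + emotion features
print(f"baseline accuracy:  {baseline:.3f}")
print(f"augmented accuracy: {augmented:.3f}")

In such a setup, any gain of the augmented run over the baseline plays the role of the improvement the paper reports (27% for male speech under fear, 29% for female speech under anger), though the actual experiment would use real speech recordings and emotion annotations rather than synthetic vectors.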

Keywords

Animal Language, Spoken English, Recognition System, Language Communication

Cite This Paper

Yunfeng Qiu, Dengfeng Yao and Xinchen Kang. Animal Application in Oral English Recognition System. Zoology and Animal Physiology (2022), Vol. 3, Issue 2: 43-55. https://doi.org/10.38007/ZAP.2022.030204.
