TY - GEN
T1 - Recognition of human voice utterances from facial surface EMG without using audio signals
AU - Arjunan, Sridhar Poosapadi
AU - Weghorn, Hans
AU - Kumar, Dinesh Kant
AU - Naik, Ganesh
AU - Yau, Wai Chee
PY - 2008
Y1 - 2008
N2 - This research examines the evaluation of fSEMG (facial surface electromyogram) for recognizing speech utterances in English and German. The raw sampling is performed without sensing any audio signal, and the system is designed for Human Computer Interaction (HCI) based on voice commands. An effective technique is presented that exploits the activity of the facial articulatory muscles and human factors for silent vowel recognition. The muscle signals are reduced to activity parameters by temporal integration, and the matching is performed by an artificial back-propagation neural network that has to be trained for each individual human user. In the experiments, different speaking styles and speeds and different languages were investigated. Cross-validation was used to convert a limited set of single-shot experiments into a broader statistical reliability test of the classification method. The experimental results show that this technique yields high recognition rates for all participants in both languages. They also show that the system is easy to train for a human user, which suggests that the described recognition approach can work reliably for simple vowel-based commands in HCI, especially when the user speaks more than one language, as well as for people who suffer from certain speech disabilities.
AB - This research examines the evaluation of fSEMG (facial surface electromyogram) for recognizing speech utterances in English and German. The raw sampling is performed without sensing any audio signal, and the system is designed for Human Computer Interaction (HCI) based on voice commands. An effective technique is presented that exploits the activity of the facial articulatory muscles and human factors for silent vowel recognition. The muscle signals are reduced to activity parameters by temporal integration, and the matching is performed by an artificial back-propagation neural network that has to be trained for each individual human user. In the experiments, different speaking styles and speeds and different languages were investigated. Cross-validation was used to convert a limited set of single-shot experiments into a broader statistical reliability test of the classification method. The experimental results show that this technique yields high recognition rates for all participants in both languages. They also show that the system is easy to train for a human user, which suggests that the described recognition approach can work reliably for simple vowel-based commands in HCI, especially when the user speaks more than one language, as well as for people who suffer from certain speech disabilities.
UR - http://www.scopus.com/inward/record.url?scp=70350043763&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-88710-2_29
DO - 10.1007/978-3-540-88710-2_29
M3 - Conference contribution
AN - SCOPUS:70350043763
SN - 9783540887096
T3 - Lecture Notes in Business Information Processing
SP - 366
EP - 378
BT - ICEIS 2007
A2 - Filipe, J
A2 - Cordeiro, J
A2 - Cardoso, J
PB - Springer
T2 - 9th International Conference on Enterprise Information Systems, ICEIS 2007
Y2 - 12 June 2007 through 16 June 2007
ER -