This research evaluates facial surface electromyography (fSEMG) for recognizing speech utterances in English and German. The raw signals are sampled without sensing any audio, and the system is designed for human-computer interaction (HCI) based on voice commands. An effective technique is presented that exploits the activity of the articulatory facial muscles, together with human factors, for silent vowel recognition. The muscle signals are reduced to activity parameters by temporal integration, and classification is performed by an artificial back-propagation neural network that must be trained for each individual user. The experiments investigated different speaking styles, different speaking speeds, and different languages. Cross-validation was used to turn a limited set of single-shot experiments into a statistically broader reliability test of the classification method. The experimental results show that the technique yields high recognition rates for all participants in both languages. They also show that the system is easy for a user to train, which suggests that the described recognition approach can work reliably for simple vowel-based commands in HCI, whether the user speaks one language or several, as well as for people with certain speech disabilities.
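The processing chain sketched in the abstract, rectify-and-integrate the muscle signals into a few activity parameters, then classify them with a back-propagation network evaluated by cross-validation, can be illustrated as follows. This is a minimal sketch on synthetic data: the channel count, window length, hidden-layer size, vowel set, and per-channel amplitude patterns are all hypothetical placeholders, not values from the paper, and the network is a generic one-hidden-layer sigmoid net trained by plain back-propagation.

```python
import numpy as np

def integrate_emg(raw, win=50):
    """Rectify a raw EMG channel and sum it over non-overlapping
    windows, reducing the signal to a few activity parameters."""
    rect = np.abs(raw)
    n = len(rect) // win
    return rect[:n * win].reshape(n, win).sum(axis=1)

class BackpropNet:
    """Generic one-hidden-layer sigmoid network, trained by plain
    full-batch back-propagation of the squared error."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, X):
        self.h = self._sig(X @ self.W1 + self.b1)
        self.o = self._sig(self.h @ self.W2 + self.b2)
        return self.o

    def train(self, X, Y, epochs=2000):
        m = len(X)
        for _ in range(epochs):
            o = self.forward(X)
            # gradients of the squared error through both sigmoids
            d_o = (o - Y) * o * (1 - o)
            d_h = (d_o @ self.W2.T) * self.h * (1 - self.h)
            self.W2 -= self.lr * self.h.T @ d_o / m
            self.b2 -= self.lr * d_o.mean(axis=0)
            self.W1 -= self.lr * X.T @ d_h / m
            self.b1 -= self.lr * d_h.mean(axis=0)

    def predict(self, X):
        return self.forward(X).argmax(axis=1)

def make_utterance(rng, amps, samples=200):
    """Hypothetical 4-channel recording: zero-mean noise whose
    per-channel amplitude depends on the articulated vowel."""
    return rng.normal(0, 1, (samples, len(amps))) * amps

# hypothetical per-channel activity patterns for three silent vowels
PATTERNS = {0: [1.0, 0.2, 0.2, 0.8],   # e.g. /a/
            1: [0.2, 1.0, 0.8, 0.2],   # e.g. /i/
            2: [0.8, 0.2, 1.0, 0.4]}   # e.g. /u/

rng = np.random.default_rng(1)
X, y = [], []
for label, amps in PATTERNS.items():
    for _ in range(10):                # 10 repetitions per vowel
        rec = make_utterance(rng, np.array(amps))
        feat = np.concatenate([integrate_emg(rec[:, c]) for c in range(4)])
        X.append(feat)
        y.append(label)
X, y = np.array(X), np.array(y)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # normalize for the sigmoid net

# k-fold cross-validation: a limited set of recordings yields
# a broader statistical estimate of the classifier's reliability
folds, hits = 5, 0
idx = np.arange(len(X))
for f in range(folds):
    test = idx % folds == f
    net = BackpropNet(X.shape[1], 8, 3, seed=f)
    net.train(X[~test], np.eye(3)[y[~test]])
    hits += int((net.predict(X[test]) == y[test]).sum())
accuracy = hits / len(X)
print(f"cross-validated accuracy: {accuracy:.2f}")
```

Because the integrated activity parameters differ strongly between the synthetic vowel patterns, even this small per-user network separates the classes well, which mirrors the paper's point that the classifier is cheap to train for each individual user.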