Understanding facial expressions is a fundamental problem in affective computing, with the potential to improve both sides of a conversation with a computational agent. Static approaches based on techniques such as Gabor filters currently represent the state of the art for identifying facial expressions, but they remain comparatively slow and unreliable. In this study we introduce a dynamic technique based on optical flow, compare it against this static Gabor baseline, and integrate static and dynamic facial characteristics into novel fused models. Rather than relying on complex machine learning, the system uses a simple, fast template model based on K-Means, with a Hidden Markov Model providing temporal context. It is trained and evaluated on distinct subsets of the Cohn-Kanade database of adult faces, which provides classifications into six basic expressions. Experimental results show that dynamic feature extraction based on optical flow considerably improves the recognition rate. Fusing the dynamic optical-flow features with the static Gabor features also raises the recognition rate somewhat over the static baseline, but the hybrids tested are not competitive with the pure dynamic model.
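The pipeline sketched in the abstract — per-frame motion features quantised against K-Means templates, with a temporal model scoring the resulting label sequence under one model per expression — can be illustrated roughly as follows. This is a toy sketch, not the paper's implementation: the optical-flow features are replaced by hand-made 2-D motion vectors for two invented expressions ("smile", "frown"), and the Hidden Markov Model is simplified to a first-order Markov chain over the observed template labels (a degenerate HMM whose states are directly observed). All names and parameters here are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=10):
    """Plain Lloyd's K-Means; the centres act as per-frame 'templates'."""
    # Deterministic init: pick k frames spread evenly through the data.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep empty clusters where they are
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def quantize(X, centers):
    """Map each per-frame feature vector to its nearest template index."""
    return np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)

def fit_chain(seqs, k, smooth=1.0):
    """Add-one-smoothed initial and transition log-probabilities
    of template labels, estimated from one expression's training sequences."""
    pi = np.full(k, smooth)
    A = np.full((k, k), smooth)
    for s in seqs:
        pi[s[0]] += 1
        for a, b in zip(s[:-1], s[1:]):
            A[a, b] += 1
    return np.log(pi / pi.sum()), np.log(A / A.sum(axis=1, keepdims=True))

def score(seq, log_pi, log_A):
    """Log-likelihood of a template-label sequence under one expression model."""
    return log_pi[seq[0]] + sum(log_A[a, b] for a, b in zip(seq[:-1], seq[1:]))

# Toy stand-ins for per-frame mean optical-flow vectors (purely illustrative):
# "smile" sequences drift right then up; "frown" sequences left then down.
smile = [np.array([[1., 0.], [1., 0.], [1., 1.], [1., 1.]]) for _ in range(5)]
frown = [np.array([[-1., 0.], [-1., 0.], [-1., -1.], [-1., -1.]]) for _ in range(5)]

X = np.vstack(smile + frown)
centers = kmeans(X, k=4)
models = {name: fit_chain([quantize(s, centers) for s in seqs], k=4)
          for name, seqs in [("smile", smile), ("frown", frown)]}

def classify(seq):
    obs = quantize(seq, centers)
    return max(models, key=lambda m: score(obs, *models[m]))

print(classify(np.array([[1., 0.], [1., 1.], [1., 1.], [1., 1.]])))  # prints "smile"
```

The temporal model is what distinguishes this from frame-by-frame template matching: a sequence of labels that is individually plausible but ordered unlike any training sequence scores poorly, which is the kind of context the abstract credits to the HMM.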