Date Added: Sep 2010
The authors present a novel approach to lip synchronization that analyzes the relationship between a person's speech signal and data extracted from his/her lip movements. To model the speech, they use a nonlinear, time-varying sum of AM-FM signals, each of which models a single formant frequency. The model is then realized using Taylor series expansions so that a closed-form formula is obtained relating the speech amplitudes and instantaneous frequencies to the varying width and height of the lips. Based on this formula, the lip-movement data are used to generate a semi-speech signal, which is then cross-correlated with the original speech over a span of delays.
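The final alignment step can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's method: the formant parameters, the AM/FM shapes, and the lip-to-parameter mapping are all placeholder assumptions; only the overall idea (build a sum-of-AM-FM "semi-speech" signal, then cross-correlate it with the speech over candidate delays) follows the summary above.

```python
import numpy as np

fs = 8000                       # sample rate (Hz), illustrative
t = np.arange(0, 0.5, 1 / fs)   # 0.5 s of samples

def am_fm_sum(t, formants):
    """Sum of AM-FM components; each (a, fc, fm, beta) models one formant.

    a    : amplitude of the component
    fc   : carrier (formant) frequency in Hz
    fm   : modulation frequency in Hz
    beta : FM modulation index
    """
    sig = np.zeros_like(t)
    for a, fc, fm, beta in formants:
        amp = a * (1 + 0.5 * np.cos(2 * np.pi * fm * t))                # AM envelope
        phase = 2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t)  # FM phase
        sig += amp * np.cos(phase)
    return sig

# "Speech" and a delayed "semi-speech"; in the paper the semi-speech would be
# driven by lip width/height via the closed-form formula, which is assumed here.
formants = [(1.0, 500, 5, 2.0), (0.6, 1500, 7, 1.5)]
speech = am_fm_sum(t, formants)
true_delay = 40                          # lag in samples (circular, for the sketch)
semi = np.roll(speech, true_delay)

# Cross-correlate over a span of candidate delays; the best delay maximizes
# the inner product between the semi-speech and the shifted speech.
lags = np.arange(0, 100)
scores = [np.dot(semi, np.roll(speech, lag)) for lag in lags]
best = int(lags[np.argmax(scores)])
print(best)
```

The peak of the correlation curve recovers the delay between the two signals, which is the quantity of interest for lip synchronization.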