Toward an Example-Based Machine Translation From Written Text to ASL Using Virtual Agent Animation
Modern computational linguistics software cannot reproduce important aspects of sign language translation. Based on a survey of prior research, the authors conclude that most automatic sign language translation systems ignore many of these aspects when generating animation, so the resulting interpretation loses part of the intended meaning. This problem stems from treating sign language as a derivative of spoken language, whereas it is a complete language with its own unique grammar. That grammar relies on semantic-cognitive models of space, time, action, and facial expression to convey complex information, and respecting it makes signed interpretation more efficient, smooth, and expressive, with natural-looking human gestures.