The Babelsynch program takes the audio tracks of a poorly synched video, centers the voices in the stereo field (leaving the existing lack of synch intact), and merges in a stream of foreign-language words synchronized to, and roughly phonetically based on, the lip movements of the onscreen speakers.
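The "center the voices" step could be as simple as a mid/side trick: sum the two channels into a mid signal and play it on both sides. This is only a sketch of one plausible approach (real dialogue isolation would need actual source separation); the function name and signature are invented here for illustration.

```python
def center_voices(left, right):
    """Collapse the stereo field so the (typically center-panned) dialogue
    sits dead center: mid = (L + R) / 2, written to both output channels.
    Illustrative sketch only; lists of float samples stand in for real audio.
    """
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    return mid, mid  # same mid signal on both channels
```

For example, a sound panned hard left in one sample and hard right in the next ends up at half amplitude, dead center, in both.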
The language/dialect, accent, and voice type of the artificially generated voice(s) are user-selectable.
This turns a hard-to-watch video into a properly done voiced-over "translation". To make it even more realistic, the fake voice is sharply attenuated whenever the original voice is talking.
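That attenuation is essentially sidechain ducking: track the envelope of the original voice and cut the synthetic track's gain when the original is active. A minimal sketch, assuming float sample lists and made-up `threshold` and `duck_gain` parameters:

```python
def duck(fake, original, threshold=0.1, duck_gain=0.15):
    """Sidechain-style ducking sketch: when the original voice's envelope
    rises above threshold, sharply attenuate the synthetic track.
    Parameter values are illustrative assumptions, not tuned settings.
    """
    out = []
    env = 0.0
    for f, o in zip(fake, original):
        env = max(abs(o), env * 0.99)  # crude peak-hold envelope follower with decay
        gain = duck_gain if env > threshold else 1.0
        out.append(f * gain)
    return out
```

A real implementation would smooth the gain changes (attack/release ramps) to avoid audible clicks, but the idea is the same.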
So yes, we could get Richard Feynman apparently delivering a lecture in guttural Klingon, while the (real) RF's voice comes in as an overdub, and be able to concentrate on the content of the talk rather than trying to ignore the glaring visual/audio dichotomy of an unsynched video.
Polyglots can select the "speaking in tongues" option, causing the program to use syllable combinations which aren't real words in any language.
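The "speaking in tongues" generator could just stitch together random onset-nucleus-coda syllables so the output is pronounceable but means nothing. The inventories below are invented for illustration and not drawn from any real language's phonotactics:

```python
import random

# Hypothetical syllable inventories, chosen only to sound vaguely word-like.
ONSETS = ["b", "k", "gr", "t", "zh", "kl", "m", "sp"]
NUCLEI = ["a", "e", "i", "o", "u", "aw"]
CODAS = ["", "n", "k", "rt", "sh"]

def tongues_word(rng, n_syllables=2):
    """Generate a pronounceable non-word by concatenating random
    onset+nucleus+coda syllables. Sketch only; real output would also
    need to be filtered against dictionaries to guarantee non-wordness.
    """
    return "".join(
        rng.choice(ONSETS) + rng.choice(NUCLEI) + rng.choice(CODAS)
        for _ in range(n_syllables)
    )
```

Seeding the generator (`random.Random(seed)`) would keep the gibberish reproducible across runs, which matters if the track has to stay synched to the lip movements.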