Various unreliable sources say human communication is 85% body language and about 10% vocal tone, with just 5% being word content (!).
Recruit some personal-effectiveness teachers: the people who say to lean toward a person, make eye contact, say their name, and mirror their body language. Also mine their utterances for "speaking styles", then mimic/translate the effective body-language modality.
Have the personal-effectiveness trainers wear microgrid gesture suits, the motion-capture kind used to make computer models of live humans, and collect data.
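As a rough sketch of what one captured sample might look like, here is a hypothetical Python record; every field name is an assumption, since no particular suit or SDK is specified:

```python
from dataclasses import dataclass

@dataclass
class PoseFrame:
    """One motion-capture frame from the gesture suit.

    Every field here is hypothetical; a real suit's SDK
    would define its own schema.
    """
    timestamp_ms: int                            # capture time in milliseconds
    joint_rotations: list[float]                 # flattened per-joint angles
    gaze_direction: tuple[float, float, float]   # unit vector from a head tracker
    lean_angle_deg: float                        # forward/back torso lean
```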
Have them interact with people over video chat and in person, with instructions that these data-sampled effective humans speak computer-generated AI chat content. The visual onscreen is a super-realistic cartoon driven by the gesture suit.
Then have the audience, or user, react to the human body-language expert while the expert is acting like an AI. This becomes a big data file suitable for computer programs and big-data tools to process. During the body-language data sampling, the effectiveness trainer, a fast reader, basically teleprompts the AI chat content.
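A minimal sketch of how that file could be built, assuming JSON Lines as the on-disk format; the SessionTick record, its fields, and the happiness_score label are all invented for illustration:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SessionTick:
    """One time-aligned slice of a data-sampling session (hypothetical schema)."""
    expert_frame: dict       # the trainer's pose frame, as a plain dict
    audience_frame: dict     # audience pose from a camera-based tracker
    ai_text: str             # the teleprompted AI chat content being spoken
    happiness_score: float   # hypothetical label, e.g. from a smile detector

def append_tick(path: str, tick: SessionTick) -> None:
    """Append one tick as a JSON line; the growing file is the 'big data file'."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(tick)) + "\n")
```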
Then another kind of AI, a pattern-recognition system like those used for voice translation or image identification, figures out which body language is most effective given the audience's or user's own body language, so there is a "next move" database, or network, that responds to actual human body language to make people the happiest.
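A minimal sketch of that "next move" lookup, assuming session logs like the ones above have been distilled into (audience features, gesture, happiness score) triples; the database contents, feature vectors, and gesture names are all made up for illustration, and a trained sequence model could replace the lookup behind the same interface:

```python
import math

# (audience_features, expert_gesture_id, happiness_score) triples,
# distilled from many logged sessions; values here are toy data.
NEXT_MOVE_DB = [
    ([0.1, 0.9], "mirror_lean", 0.8),
    ([0.7, 0.2], "open_palms", 0.6),
    ([0.4, 0.4], "nod_and_smile", 0.9),
]

def next_move(audience_features: list[float]) -> str:
    """Pick the gesture whose recorded context best matches the audience's
    current state, breaking ties toward the higher historical happiness score."""
    def key(entry):
        ctx, _gesture, score = entry
        return (math.dist(ctx, audience_features), -score)
    return min(NEXT_MOVE_DB, key=key)[1]

print(next_move([0.35, 0.5]))  # -> "nod_and_smile"
```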
I could see this being valuable for animated films, prior to or concurrent with robot girlfriends, companion robots, and retail staff.