Small, thin sensors, either surgically implanted or otherwise affixed, could monitor the relevant muscle groups in the human body. The sensors detect which muscles are active and which are at rest at any given moment.
It is possible to tell the difference between a fake smile and a real smile by which facial muscles are contracted (and which are not). So, for example, on flashing an exaggerated salesman-like smile, a signal could be sent wirelessly to a small portable processing device (e.g. a modified iPod) with speakers. An audible cheesy-smile "Ting!" is then produced.
Angry face: "Dun dun duuun!".
Harsh finger pointing: computer error "Dunt!"
Sweeping arm movement, "Swoosh!"
Squinty eyes with clenched jaw: "Hissssss."
Genuine happy face: Loud choral "Haa-lle-lu-ja!" (My personal favorite) ...and so on.
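The mapping above could be sketched as a simple lookup from sets of active muscle groups to sound effects. Everything here is a hypothetical sketch: the muscle names follow standard facial anatomy (a genuine smile engages the orbicularis oculi around the eyes, while a fake one uses mainly the zygomaticus major at the mouth corners), but the file names and data shapes are made up.

```python
# Hypothetical sketch: map sets of active muscle groups to sound effects.
# A genuine smile crinkles the eyes (orbicularis oculi) in addition to
# pulling up the mouth corners (zygomaticus major); a fake smile doesn't.
GESTURE_SOUNDS = {
    frozenset({"zygomaticus_major"}): "ting.wav",                             # cheesy smile
    frozenset({"zygomaticus_major", "orbicularis_oculi"}): "hallelujah.wav",  # genuine smile
    frozenset({"corrugator_supercilii"}): "dun_dun_duuun.wav",                # angry face
    frozenset({"orbicularis_oculi", "masseter"}): "hiss.wav",                 # squint + clenched jaw
}

def pick_sound(active_muscles):
    """Return the sound file for the current muscle activation, or None."""
    return GESTURE_SOUNDS.get(frozenset(active_muscles))
```

Unrecognized activation patterns simply return `None`, so the device stays silent rather than guessing.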
Since a good portion of human communication happens through body language, this could be helpful to the visually impaired, or to people who are simply less skilled at reading body language. An enhanced audio polygraph of sorts.
The signal is tagged with a unique ID to ensure it is read only by the correct device (so others with the same setup do not process your Bioaudimoticons, and vice versa). The processor has an on/off switch. Actions can be mapped to your choice of sound effects or left at the defaults (like sound themes on a PC).
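One minimal way to keep devices from processing each other's signals: each sensor packet carries the device ID plus an authentication tag, and the receiver drops anything with a bad tag or a foreign ID. This is a hypothetical sketch using an HMAC (message authentication rather than full encryption); the key and ID values are made up.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"my-device-secret"  # hypothetical shared key, paired at setup
MY_DEVICE_ID = "bioaud-0042"      # hypothetical unique device ID

def pack(event, device_id=MY_DEVICE_ID, key=SECRET_KEY):
    """Tag an event with the device ID and an HMAC so receivers can verify it."""
    body = json.dumps({"id": device_id, "event": event}).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body, tag

def accept(body, tag, device_id=MY_DEVICE_ID, key=SECRET_KEY):
    """Process only messages from our own sensors: valid tag AND matching ID."""
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None                 # tampered, or signed with a different key
    msg = json.loads(body)
    return msg["event"] if msg["id"] == device_id else None  # someone else's setup
```

A neighbor's identically built rig would either fail the tag check (different key) or the ID check, so their eye-rolls never trigger your speakers.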
Granted, one could verbally generate the equivalent onomatopoeia while making a gesture. Funny when well timed, but probably not as funny as an actual sound effect. In addition, manually activating a portable "sound effect box" would be distracting and not nearly as fluid in conversation. ("Darn, wait, I pressed the wrong sound effect for 'eye-roll'...")
Option: Assign individual musical notes (or chords) to your various actions, enabling the creation of "gestural song sequences". Interesting possibilities for a collaborative group of musically inclined, Bioaudimoticon-enabled people.
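The note-per-gesture option reduces to the same lookup idea: assign each gesture a pitch and play the sequence in order. A minimal sketch, with made-up gesture names and an assumed MIDI-note encoding (60 = middle C):

```python
# Hypothetical gesture-to-note assignment, using MIDI note numbers.
NOTE_MAP = {
    "arm_sweep": 60,     # C4
    "finger_point": 64,  # E4
    "smile": 67,         # G4
    "eye_roll": 72,      # C5
}

def gestures_to_song(gestures, note_map=NOTE_MAP):
    """Translate a gesture sequence into MIDI notes, skipping unmapped gestures."""
    return [note_map[g] for g in gestures if g in note_map]
```

With this mapping, an arm sweep, a point, and a smile in sequence arpeggiate a C-major triad; a group could divide the note map among themselves and "play" a piece together by gesturing in turn.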