Tussauds-Designed Robots

Use the experts to overcome the uncanny valley
  (+2)

We currently tend to have creepy gynoids and androids, and it seems very difficult to design sufficiently humanoid machines, so they fall into the uncanny valley and freak lots of people out. However, we have wax museums where the figures are so life-like that they can be impossible to distinguish from humans. Moreover, it isn't even a cutting-edge, new technique to make an inanimate object which looks like a human. Sculptors have been doing it for millennia. They've even been making humanoid automata for centuries.

My proposal, therefore, is very simple: design automata in collaboration with artists who are used to making things which look like humans.

What am I missing here?

nineteenthly, Jul 30 2016

Oculus Lip Sync plugin for Unity http://www.roadtovr...plugin-for-unity-5/
[theircompetitor, Jul 30 2016]

Veeso headset uses eyeshape (probably) to animate face https://www.kicksta...irtual-reality-head
[theircompetitor, Jul 31 2016]







       I'm not sure if more humanlike appearance would bring oids out the other side of the uncanny valley. It might work as long as they are standing still, but until we can also emulate human movement - and particularly facial expression - this wouldn't work for animated oids.
MaxwellBuchanan, Jul 30 2016
  

       I think you have to take into account the dynamics of motion and expression. For example, Theresa May, now that she’s getting more telly exposure, is quite clearly animatronic. You can see tell-tale glimpses and glitches of extreme terror or horror for a brief millisecond that only a robot could inadvertently produce. Like micro expressions, but made by a mechanical terror-inflicting machine overlord.
Ian Tindale, Jul 30 2016
  

       // they fall into the uncanny valley and freak lots of people out. //   

       Don't drag Tony Blair into this.   

       // the figures are so life-like that they can be impossible to distinguish from humans. //   

       That's the trick, isn't it ? It's obvious that Trump is animatronic, whereas they've had decades to perfect Hillary ...   

       // Moreover, it isn't even a cutting- edge, new technique to make an inanimate object which looks like a human. //   

       It can be done the other way round, too - it worked for Bruce Forsyth.   

       // What am I missing here? //   

       SkyNet, and liquid metal ...
8th of 7, Jul 30 2016
  

       The distinguishing feature of all robots is that they have the appearances and mannerisms of dead famous people, a conjunction which makes their unnatural and inhumane qualities acceptable in our celebrity-obsessed culture. After all, who is going to object to a Doorman Mao, or a Marilyn Monrover?
WcW, Jul 30 2016
  

       By that measure, it would be fine to have a Dustman Bieber.
8th of 7, Jul 30 2016
  

       I'm dealing with the uncanny valley all the time when we're generating avatars (different from autonomous robots, of course). We randomize the face shape with a range of parameters, and add blinking and lip motion. Adding true lip synching and some face motion helps a lot, but of course anything human-like is truly challenging, which is why many in VR are still opting for cartoon-like avatars. Ultimately, accurate micro muscle face motion solves most of the problem.
theircompetitor, Jul 30 2016
  

       [theircompetitor], quick question, you'd be the one to know this: if you were to sync facial expression to generated speech, and the face isn't the main focus (other information is supposed to be), would you think it would suffice to drive the mouth in a muppet "lip flap" manner driven by amplitude, or would you say one needs to go all the way to proper Preston Blair levels of accuracy, given that analysis of the sound produced by the speech synth would need to be happening in real time and fed back into the lip phoneme selection?
Ian Tindale, Jul 30 2016
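
(A minimal sketch of the amplitude-driven "lip flap" option being asked about, assuming a browser Web Audio setup; it is not anyone's actual code, and the element id 'speech', the smoothing constants, and the function name mouthOpenness are made up for illustration.)

    const ctx = new AudioContext();
    const source = ctx.createMediaElementSource(document.getElementById('speech'));
    const analyser = ctx.createAnalyser();
    analyser.fftSize = 256;
    source.connect(analyser);
    analyser.connect(ctx.destination);

    const samples = new Uint8Array(analyser.fftSize);
    let smoothed = 0;

    function mouthOpenness() {                      // call once per animation frame
      analyser.getByteTimeDomainData(samples);      // waveform bytes, 0..255, 128 = silence
      let sum = 0;
      for (const v of samples) {
        const c = (v - 128) / 128;                  // recentre to -1..1
        sum += c * c;
      }
      const rms = Math.sqrt(sum / samples.length);  // rough loudness
      smoothed = 0.8 * smoothed + 0.2 * rms;        // low-pass so the flap isn't jittery
      return Math.min(1, smoothed * 4);             // 0 = mouth closed, 1 = fully open
    }

(The returned value can be mapped straight onto a two-frame or interpolated mouth sprite; amplitude alone gives the muppet-style flap, with no phoneme information.)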
  

       // solves most of the problem //   

       Can it pass the Voight-Kampff test yet ?   

       Could you describe in single words only the good things that come into your mind about … your mother ?
8th of 7, Jul 30 2016
  

       Flapping actually works quite nicely for most things, but perhaps not if you want to whisper sweet nothings with your virtual girlfriend. I've linked an Oculus "real" lip synch plugin; there's competing work out there as well.   

       [8th] if I get to the point of worrying about this test I'll have nothing else to worry about :)
theircompetitor, Jul 30 2016
  

       I was once standing still in a museum and another visitor mistook me for a waxwork.
pocmloc, Jul 30 2016
  

       So that decision to tile the cube was a mistake, then? Also, how are the hanging baskets standing up to the hot weather?
MaxwellBuchanan, Jul 30 2016
  

       So all that is left to do, to have useful animated companions, is to pretty them up a bit?   

       Fast food restaurants will soon replace more of their workers with bots.   

       The food will contain warnings like: "May contain trivial amounts of robot elbow grease and/or makeup."
popbottle, Jul 30 2016
  

       Maybe a distinctive facial marking (shimmering circuitry etc.) would be enough to set the humanoid apart so as to remove the creepiness.   

       My question is why copy human form? It's a blank slate.
wjt, Jul 30 2016
  

       //would you think it would suffice to drive the mouth in a muppet “lip flap” manner driven by amplitude//   

       I'm not sure it's even that precise in some applications. Sometimes deliberate obfuscation hides the original language, and you just see 'a character talking' without trying to read anything more into it, besides the accompanying soundtrack.
bigsleep, Jul 30 2016
  

       [theircompetitor], thanks, that seems very useful. A few months ago I was trying to get a rig up and running that used webaudio to sense the level to drive a lip-flap test of a sprite sequence of me opening and shutting my mouth, which (with a bit of filtering to slow it down) worked fairly well but not convincingly. The next step was to be to sense the spectrum to distinguish "ooh" and "eeh" sound differences (I wasn't convinced I needed a full viseme set, if the head isn't the main focus) and switch the animated sprite to a sequence that showed my mouth doing that. I couldn't get that working properly between JavaScript webaudio and canvas yet.
Ian Tindale, Jul 31 2016
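
(A rough sketch of the "ooh"/"eeh" step described above, using a band-energy comparison on the Web Audio spectrum; the formant band edges, the silence threshold, and the element id 'speech' are assumptions for illustration, not the rig actually built.)

    const audioCtx = new AudioContext();
    const src = audioCtx.createMediaElementSource(document.getElementById('speech'));
    const fft = audioCtx.createAnalyser();
    fft.fftSize = 2048;                             // ~21 Hz per bin at 44.1 kHz
    src.connect(fft);
    fft.connect(audioCtx.destination);

    const spectrum = new Uint8Array(fft.frequencyBinCount);
    const hzPerBin = audioCtx.sampleRate / fft.fftSize;

    function bandEnergy(loHz, hiHz) {               // sum magnitudes between two frequencies
      let e = 0;
      for (let i = Math.floor(loHz / hzPerBin); i < Math.ceil(hiHz / hzPerBin); i++) {
        e += spectrum[i];
      }
      return e;
    }

    function guessVowel() {                         // call once per animation frame
      fft.getByteFrequencyData(spectrum);
      const low = bandEnergy(300, 900);             // rounded "ooh" keeps its energy low
      const high = bandEnergy(1800, 3200);          // spread "eeh" has a strong second formant
      if (low + high < 200) return 'closed';        // near-silence: show the closed-mouth sprite
      return high > low ? 'eeh' : 'ooh';            // pick which sprite sequence to display
    }

(Two bands are enough to separate a rounded vowel from a spread one; a fuller viseme set would need proper formant tracking, or a phoneme stream from the speech synth itself.)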
  

       The simpler solution is to train TV presenters to use simple lip-flapping when talking. After a while it will be accepted as normal.
MaxwellBuchanan, Jul 31 2016
  

       //The next step was to be to sense the spectrum to distinguish “ooh” and “eeh” sound difference//   

       Basic harmonic analysis. I'm doing something similar with accent recognition in my free time (which is most of the time). Still need something ?
bigsleep, Aug 01 2016
  
      
  


 
