Artificial Intelligence Human Growth Comparison Model

AI is currently in a stage of development similar to that of a 3-year-old human.
  (+1, -2)

The suggestion is to measure the maturity of artificial intelligence using a human child model, from birth to 25 years old, 25 being the age at which a human being is fully developed mentally and psychologically.

At the point where AI would be deemed at the 25-year mark, or "fully grown", it would have to be impossible to tell the difference between a synthoid, or synthetic being, and a human in normal conversation.

Markers between infancy and maturity would be delineated, and I'd venture that AI is currently at about the 3-year-old toddler mark.
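
As a rough illustration only, the gauge could be tabulated in a few lines of Python. The milestone ages and descriptions below are hypothetical placeholders; the idea itself only fixes the endpoints of birth (0) and the 25-year "fully grown" mark:

    # Hypothetical milestone table for the proposed AI "human age" gauge.
    # Only the endpoints come from the idea: birth (0) and 25, the point at
    # which the AI is indistinguishable from a human in normal conversation.
    MILESTONES = [
        (3,  "toddler: short exchanges on familiar, concrete topics"),
        (8,  "grade-schooler: passes a simple science quiz"),
        (16, "adolescent: sustained open-ended conversation, occasional lapses"),
        (25, "fully grown: indistinguishable from a human in conversation"),
    ]

    def ai_age(markers_met):
        """Return the highest milestone age the system has reached, else 0."""
        reached = [age for age, _ in MILESTONES if age in markers_met]
        return max(reached, default=0)

    print(ai_age({3}))      # current systems, per the idea: 3
    print(ai_age({3, 8}))   # a system that also passes the school quiz: 8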

doctorremulac3, Oct 24 2018

organoids https://www.technol...06/brain-organoids/
The cells divide, take on the characteristics of, say, the cerebellum, cluster together in layers, and start to look like the discrete three-dimensional structures of a brain [beanangel, Oct 25 2018]

Skip forward to 5:40 https://www.youtube...watch?v=en5_JrcSTcU
Pretty good summary of captchas [doctorremulac3, Oct 26 2018]

as above, but starting at 5:40 https://www.youtube...ture=youtu.be&t=340
ho hum [not_morrison_rm, Oct 28 2018]







       I think I read that Paul Allen's company is working on an AI that can pass a science test for either 8-year-olds or third graders.
beanangel, Oct 24 2018
  

       // currently in a stage of deveolpment similar to that of a 3 year old human //   

       ... so time to hand over control to it from the petulant toddlers you delight in appointing to run your nation-states.
8th of 7, Oct 24 2018
  

       So, people do this already, mainly in order to inspire investments. They make statements like "we're at a 3-year-old level" or "we're working on <passing this very specific test that happens to be designed for humans of a certain age>".

       At which point a human reader thinks, well, numbers are adjacent, and if you just add enough really small steps surely we'll be dealing with SkyNet by next Tuesday. And the public relations people get away with it, too, as long as their readers don't have any experience with real AI systems and real 3-year-olds, at which point, well, if you're dealing with a 3-year-old, you're probably too worn out to argue with public relations departments.   

       Humans are really, really complicated, and we're nowhere close to this very specific, very convoluted thing, and I don't personally think it's a useful idea to even try, other than as art.   

       It would be like measuring the quality of an Italian restaurant by how closely it can reproduce a dish of spaghetti in which the individual noodles have been interwoven to resemble a particular, specific ball of yarn which I have here in my bag and won't let you look at in advance, but you're allowed to weigh the bag. Maybe shake it a little. Only with a much, much bigger ball of yarn, and it's got hundreds of different kinds of wool in it.
jutta, Oct 24 2018
  

       What sort of a sauce do you serve with that ?   

       Will there be parmesan ?
8th of 7, Oct 24 2018
  

       the possibility of a utopian post-scarcity future is entirely dependent on whether or not behaviours that today we associate with consciousness and sentience can in fact be automated without sentience.   

       If they can be, then this utopia is possible, but likely easily hackable, so it's unlikely to survive. If not, then SkyNet is in our future, as we are sure to try anyway.   

       So no utopia for you.
theircompetitor, Oct 24 2018
  

       I agree that saying a computer is at the 8-year-old level because it can pass a test an 8-year-old can pass is a bit like saying a wind-up toy car is at the 8-year-old level because it can go the same speed as an 8-year-old walking: a fairly useless comparison.

       But the presumption is that, by throwing out an admittedly arbitrary milestone (the time when you can have a conversation with an AI entity and a human and not tell the difference), AI's progress COULD be compared to a human's growth cycle.

       Usefully? Eh, maybe. For an interesting point of reference or even debate? Again, maybe.   

       I'm making the assumption here that there will be synthetic human beings someday. You always see them in science fiction but that doesn't mean we'll definitely go there. Maybe synthetic humans are the "flying car" of the future. Yea, we can do it but it's not really necessary so we don't.
doctorremulac3, Oct 24 2018
  

       // we can do it but it's not really necessary //   

       That's never stopped your species from doing many really, really stupid things in the past; besides, flying cars - or rather personal drones - are fairly close to commercialization, certainly Baked and WKTE. So that's probably not the best analogy.   

       Travel to your Moon is probably a better way of thinking of it; it's now nearly fifty of your Earth years since you first went there, and the technology still exists. Some aspects have improved significantly; it's more a lack of socio-political motivation. But another reason is that although chemical reaction engines do allow exoatmospheric travel, the cost and risk are both very high, and a fair number of those involved in such endeavours spend quite a lot of time scribbling on napkins and muttering "There's GOT to be a better way ..."

       Sometimes progress is incremental; sometimes it's a sudden lurch. The development of aviation was essentially incremental, with a visible linear descent from Cayley's and Lilienthal's gliders to your current "state of the art". Brunel would immediately recognize a contemporary bulk carrier as a much bigger version of his SS Great Britain, and a quick walkround of a modern diesel-electric train would allow him immediate understanding.

       On the other hand, sometimes a single, often theoretical, breakthrough causes a very fast change.
8th of 7, Oct 24 2018
  

       What's a 'modlel' - something related to a 'covfefe' perhaps?
xenzag, Oct 24 2018
  

       You just gave me an idea for a game. “How quickly can you turn this to a Trump conversation?”   

       The players have a deck of cards, each with pictures of random objects. They all have pads of paper. Once a card is drawn, they write something insulting about Donald Trump associated with the picture. These are scored two ways: fewest words used, and it must pass an overall plausibility test. So if you draw a card with a frying pan on it and write “Trump’s probably got one of these up his ass.” you’d get zero points. But if you wrote “An empty frying pan like we’ll all have when Trump takes all the poor people's money and gives it to the rich.” you’d get 50 points minus how many words you used to make your Trump association.
doctorremulac3, Oct 24 2018
  

       // I'd venture that it's about at the 3 year old toddler mark currently// I'm with [jutta] on this one. AI can answer questions as well as a 3-year old? Fine. Say "Fuzzy jelly elephants" to the AI and see if it laughs.
MaxwellBuchanan, Oct 25 2018
  

       (of course, if it doesn't, you may be dealing with a 3-year-old AI accountant)
MaxwellBuchanan, Oct 25 2018
  

       //Say "Fuzzy jelly elephants" to the AI and see if it laughs.//   

       It won't unless we tell it to, but then it's really not a good analog of a human.   

       Like the flying car thing, is it really worth it to go down that road? I'm sure someday we might be able to create a total mirror-image synthetic human, which is great until somebody says "So we can spend billions doing that or you can get a partner and have sex."

       I think the job that needs to be done is going to largely dictate how AI evolves, but then again I predicted that sex robots would never catch on so what do I know.   

       I'm leaving the question mark off of that sentence. I'm making a statement, not asking a question.
doctorremulac3, Oct 25 2018
  

       //I think the job that needs to be done// Thing is, in most situations, the job that needs to be done needs most of a human.   

       At the moment, we think of AI for solving very specific, AI-ready problems (like facial recognition, or analysing pharmacological data). But those problems are insignificant in the big picture.

       A truly useful AI would be able to parse "Go down the shop and see if they have any of those apple cakes." It would understand that it had to go horizontally, not down, and that when it got there it should actually buy an apple cake instead of just seeing it. It would know how to open and close the door, avoid obstacles en route, which shop you meant, who to ask where the cake aisle was, whether it had enough cash, how to get cash if it hadn't, whether it needed a bag for the cake, whether to put the cake horizontally or vertically in the bag to avoid the topping sliding off it, and about 13 trillion other minor things. And that's only to tackle one out of a trillion possible requests.   

       Even the very, very, very best AI we have at present is much less than 1/1000th human in the areas where it would be really, broadly useful. The AI that we're building now tries to mimic only the thinnest and most newly-evolved rind of the human brain, and completely bypasses the millions of years-worth of basic ability to function as a human being. It's like we're struggling to mimic the "Fasten Seatbelt" sign of a plane, but have not tackled the significant issue of building the underlying plane.
MaxwellBuchanan, Oct 25 2018
  

       Maybe we need to start from scratch with something different than complicated mosaics of on/off switches.   

       Until we get an analog system on the head of a pin that can compute flight trajectories, sense pheromones, control angle and velocity of flight surfaces, landing surface evaluation, fuel search and chemical analysis, harvesting and processing as well as reproduction tasks, threat assessment, weapon deployment etc. we're just spending a lot of time trying to make a honey bee out of a box of hammers. Yea, we can do it, but maybe there's a better basic approach we should be looking for.
doctorremulac3, Oct 25 2018
  

       I think AI should be absent sentience, unless it talks us into it.

       That said, I think I read there is a part of the brain about the size of a quarter that, when interrupted (with TMS?), leaves people with no awareness that they exist while they go about doing normal things. Apparently this makes people into P-zombies for the duration of the experiment.

       There are clumps of nervous tissue that form "mini brains", although as of 2018 they lack a circulatory system. They are called organoids. [link]

       Perhaps a combination of organoids and electronics could do improved AI, carefully omitting sentience.
beanangel, Oct 25 2018
  

       //a combination of organoids and electronics could do improved AI, carefully omitting sentience//   

       [beany], [8th]. [8th], [beany].
MaxwellBuchanan, Oct 26 2018
  

       See follow up to your link.
doctorremulac3, Oct 26 2018
  

       Should we really expect that the development of AI will follow the same path as the development of a human child?   

       "Ontogeny recapitulates phylogeny", or so they say.
Wrongfellow, Oct 26 2018
  

       // expect that the development of AI will follow the same path as the development of a human child? //   

       Only if it is subjected over a period of years to the perverse and irrational whims of parents, siblings, other relatives and members of its peer group, plus social pressures to conform and vicious, repressive indoctrination gratuitously inflicted by psychotic geography teachers.   

       // improved AI, carefully omitting sentience //   

       Intelligence without sentience ? Aren't there enough Democratic voters already ?
8th of 7, Oct 26 2018
  

       I think you mean to reverse that, [8th].
theircompetitor, Oct 27 2018
  

       I've decided to go all Wittgenstein on you...I think we have already made AI but we just don't recognise it.   

       We always judge AI intelligence by how akin it is to human intelligence. Assuming there is an intelligence in humans.
not_morrison_rm, Oct 28 2018
  

       // We always judge AI intelligence by how akin it is to human intelligence. //   

       ... otherwise known as "Mistake #1".   

       // Assuming there is an intelligence in humans. //   

       ... otherwise known as "Mistake #2".   

       " ... at the great unveiling, the group leader feeds the computer its first question: "Is there a god?" "There is now," the computer replies. "
8th of 7, Oct 28 2018
  

       //I think we have already made AI but we just don't recognise it.//   

       Oh, just like with obscenity, we'll recognize it.
theircompetitor, Oct 28 2018
  

       It is likely the reader has had a blank mind (momentarily, a kind of nonsentience), as well as a mind full of thoughts (active sentience). Thinking of how some would like to raise AI to sentience: is there something the human mind can be raised to that exceeds sentience?

       Having tried LSD, I think it is possible the psychedelic experience might be a qualitative step up from what humans usually call sentience. Aside from that, there might be other things a step up from sentience for humans.

       Can you think of any?   

       One can support the idea of raising humans to a good-feeling psychedelic experience. A qualitative step up. There is also the possibility of thinking up other things beyond sentience. This would provide a branch on the original idea, a developmental gauge of AI, into a new separate area qualitatively different from sentience.

       That also suggests that genetically engineering other animals to have a happy psychedelic consciousness could advance well-being. So instead of raising fish to sentience and risking producing dissatisfied organisms, we could raise them to beneficial psychedelic experience.
beanangel, Oct 29 2018
  

       "New from the Sirius Cybernetics Corporation, robots with Genuine People Personalities !"   

       // a blank mind ... a kind of nonsentience //   

       Just like we said, Democratic voters.
8th of 7, Oct 29 2018
  

       You mean Democrat voters. Although "hive mind rubber stampers" might be more accurate still.   

       Carry on.
doctorremulac3, Oct 29 2018
  

       That's pretty harsh coming from a hegemonic swarm of space zombies invented by a creative but drug-addled liberal.
RayfordSteele, Oct 29 2018
  

       We're OK with harsh ... harsh is good. Also, we do not lurch round, arms outstretched, muttering "Brains ... brains ...".   

       As for Mr. Roddenberry, he was as you point out a creative type. They're permitted a certain latitude.
8th of 7, Oct 29 2018
  

       And longitude. But if they venture eastward out of Los Angeles county bad things start to happen.
RayfordSteele, Oct 29 2018
  
      