
Artificial Intelligence

Make new and better intelligence, since the current one is failing
  (+2, -1)

It seemed, at the turn of the 20th century, that people were able to rationalize and understand the world intelligently. The studies and discoveries in electricity and biology, aerodynamics and thermodynamics, atomic physics and psychotherapy all seemed to be leading to a true and complete understanding of the world, removing superstitions and making the world "a better place".

Then came the big political upheavals, Stalinism and Hitlerism and Maoism and some say Kissingerism, followed by Chaos theory, Islamic Selfism, Nihilism and Greenpeace PETAism, all leading to war, violence, death and destruction.

Somehow it seems that due to our bio-limitations we have gone wrong somewhere, and cannot get it right.

So the idea here is to establish an artificial intelligence that is not bogged down by human desires, one that would see and understand the world as it is and cope with it automatically.

It would take into consideration all human aspirations and would know game theory and politics, so that it could figure out how to react to human behavior as well as natural phenomena, worldwide.

I have no idea what the outcome of such an intelligence would be, but it may decide on a better way of government, or of dismantling governments altogether. In any case, any understanding it has will be paired with actual plans for actions to be taken in accordance with the Artificial Intelligence's conclusions. These, of course, will be thought out well, simulated automatically, analyzed and compared, and will include ways of gathering information about the plan's progress and fine-tuning the events as they unfold.

The output will not be 42.

pashute, Dec 17 2012
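
As a toy illustration of the propose/simulate/compare/monitor cycle described in the idea (the single-number "world" and every function name below are invented for the sketch, not part of the idea):

import random

def simulate(state, plan):
    """Predicted end state if the plan runs exactly as intended."""
    return state + sum(plan)

def score(state, goal):
    """Higher is better: negative distance from the goal."""
    return -abs(goal - state)

def govern(state, goal, steps=200):
    while steps > 0 and abs(goal - state) > 0.5:
        # Propose candidate plans and keep the one whose simulation scores best.
        candidates = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
        plan = max(candidates, key=lambda p: score(simulate(state, p), goal))
        for action in plan:
            state += action + random.gauss(0, 0.1)   # reality drifts from the simulation
            steps -= 1
            # Gather information on progress; if the plan now looks worse than
            # doing nothing, abandon it and re-plan.
            if score(simulate(state, plan), goal) < score(state, goal):
                break
    return state

print(govern(state=0.0, goal=10.0))

The re-planning check inside the loop stands in for "gathering information about the plan's progress and fine-tuning the events as they unfold".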

Conway's Game of Life http://en.wikipedia...nway's_Game_of_Life [rcarty, Dec 17 2012]

The "Old" notion of AI https://www.youtube...watch?v=ZuYbDP2kDfg
And why it was bollocks. [zen_tom, Dec 18 2012]

Robotic Nation http://marshallbrai.../robotic-nation.htm
A dramatization of what this could look like [sninctown, Dec 19 2012]







       We've been telling you this for the last decade, and would you listen?
8th of 7, Dec 17 2012
  

       I imagine a megasuperultracomputer running extremely mind-bogglingly complex games of 'life genesis' or 'games of life' and crossmatching every possible eventuality to constant satellite surveillance of human behaviour patterns.   

       I suggest a new title, because artificial intelligence is not 'original'.
rcarty, Dec 17 2012
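
For reference, the "Game of Life" linked above is Conway's cellular automaton. A minimal sketch of one generation step (an illustration added here, not [rcarty]'s code):

from collections import Counter

def step(live_cells):
    """live_cells: a set of (x, y) tuples; returns the next generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn with exactly 3 neighbours, or 2 if it is already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))   # after four generations the glider reappears one cell away diagonally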
  

       What I'm claiming is that the old-school artificial intelligence, simply emulating a Turing test machine, is dead, baked, and in any case the wrong way to go, because it is too artificial and does not aspire to be truly intelligent.   

       What we need on the other hand is a whole new paradigm of Artificial Intelligence with a capital I, to save the world. It will truly be artificially intelligent, and at the same time intelligently artificial.
pashute, Dec 17 2012
  

       If truly intelligent, it wouldn't be artificial.
sqeaketh the wheel, Dec 17 2012
  

       What [squeak] said.   

       However, if an identical product can be produced by two methods (e.g. ethyl alcohol by fermentation, or by catalytic hydration of ethylene), is the term "artificial" appropriate? "Synthetic" might be a better word.   

       // I think this solution might save the world... but not humanity //   

       Shhhh …
8th of 7, Dec 17 2012
  

       So, basically, the idea boils down to "invent artificial intelligence and let it run the world" ?
MaxwellBuchanan, Dec 17 2012
  

       ... a concept which is thoroughly explored by Isaac Asimov.
spidermother, Dec 18 2012
  

       "wont work" - the enemy of invention.   

       Max - yes. But with several assumptions: a. That there is objective truth. b. That best practices and solutions can be found and implemented for every problem. c. That a sustainable solution for life on earth, without killing anyone, while retaining nature at its best, allowing for comfortable conditions for anyone seeking them, can be achieved. d. That smart solutions are ones that can be administered, in other words - an intelligent solution is one that will work.   

       Robopocalypse - not really intelligent. There are always ways to solve problems without destruction and decimation. That's what inventiveness is about, and the Artificial Intelligence should be able to figure that out.   

       I once took a shortcut in Jerusalem through a park and reached a parking lot, where a young woman asked me if I could help her with her car. Her problem was that someone had parked diagonally behind her. She asked me to join a few other men who were standing nearby; she wanted one more man so that we could pick up the other car (which was a small Fiat) and move it.   

       Looking at the situation, I realized that if she drove her own car a bit FORWARD (towards the sidewalk) and to the left, she could get her car into a parallel position with the blocking car and simply move out. I told her so, but she was all into her idea of a group of men picking up the blocking car. "NO NO," she said. "Just wait here."   

       A big burly man listened to what I was telling the lady. He said: "Listen, Miss! Just give me the keys and I'll move your car out."   

       She gave him her keys. He got in, ran the motor real high, then put it into reverse and released the brakes. BANG! Her car crashed into the blocking car, leaving a big dent in both, but shoving the blocking car aside.   

       "Oh well, nothing happened," the man said, turned off the car, and walked away. We all stood there for a minute or so, in shock. The young lady said to nobody in particular: "What 'nothing'? Look at the car!" We all shrugged and strolled off.   

       This story shows many features of natural intelligence, to be contemplated.
pashute, Dec 18 2012
  

       "Cognitive Science" - it's been emerging since the 1970s and I very nearly went to study it at University back in the 90's. It takes a "systems" approach to the mind, and as such neatly bypasses all that emotively instinctive feeling that "intelligence" can't be "artificial" in the 1960's "CANNOT COMPUTE!" sense best exemplified in [see link] but when AI does emerge, it's going to be irrational, emotional and prone to psychological "issues".   

       The problem isn't crunching all the data, nor is it finding patterns, reaching intelligent conclusions, or anything like that. All that is relatively simple; the really hard part is instilling *intention*.   

       In people and animals, there are various ways we are hard (and soft) wired to direct our attention to the world around us - many of these are physical, certainly initially, and most are delivered in a chemical manner after meeting certain physical needs. Later, anticipation, expectation and habituation build a behaviour-based layer on top of the base physical stimuli, and later still, "higher" reward functions emerge from combinations of each of these things.   

       One way (and perhaps the only way) to replicate that in an intelligence is to reward it at a level where it is able to combine a loosely bound collection of inputs and behavioural outputs in achieving some physical goal - perhaps some analog to feeding.   

       Over many months of "baby-training", such a system might develop the ability to connect series of actions, sensations and kinaesthetic (i.e. awareness of bodily position and situation) information with the "feeling" associated with completing some activity. The problem then would be calibrating the reward/anti-reward mechanism to avoid overstimulation, repeated self-stimulation, habituation, addiction, mania, depression, entheogenic episodes and all manner of other "mental" issues.   

       So the requirements are:
a) a body to interact with the outside world,
b) a strong but delicately balanced sensual reward/pain mechanism that is initially sensitive to satisfying certain hardwired physical needs, but which can also conflate these onto other developing requirements, and
c) a long "training" period during which many stimuli are mapped and loosely cross-referenced with behaviour, sensual and emotional experience.
  

       With all 3 of those elements*, it's quite possible that one day we will see an artificially intelligent, man-made device - and it will be a wonderful achievement. However, it will also very likely be twitchy, emotionally unstable and very, very "human" indeed. I think it will be a few versions down the line before we entrust anything like this with any form of administrative power.   

       * and quite possibly more - it's interesting that what we tend to call "mature" behaviour often starts showing itself after a few failed attempts at finding a sexual mate... something that might be tricky to arrange for a robot...
zen_tom, Dec 18 2012
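
A minimal sketch of the reward-driven "baby-training" loop described above, assuming a purely hypothetical agent with a feeding-like goal; the "satiation" damping is a crude stand-in for the calibration problem (over-stimulation, addiction) raised in the comment. All names and constants here are invented for the illustration:

import random

ACTIONS = ["reach", "grasp", "cry", "wait"]
value = {a: 0.0 for a in ACTIONS}   # the learned "feeling" attached to each action
satiation = 0.0                     # rising satiation damps the reward signal

def reward(action):
    """Feeding-like goal: reaching and grasping are rewarded, damped by satiation."""
    global satiation
    raw = 1.0 if action in ("reach", "grasp") else -0.1
    damped = raw * max(0.0, 1.0 - satiation)             # guards against runaway self-stimulation
    satiation = min(1.0, satiation + 0.05 * max(raw, 0.0))
    return damped

for _ in range(1000):                                    # the long "training" period
    satiation = max(0.0, satiation - 0.01)               # needs slowly build up again
    if random.random() < 0.1:
        action = random.choice(ACTIONS)                  # occasional exploration
    else:
        action = max(ACTIONS, key=value.get)             # otherwise follow the best "feeling"
    value[action] += 0.1 * (reward(action) - value[action])

print(value)   # actions serving the feeding-like goal end up valued highest

The decay and damping constants are exactly the part that would need the delicate balancing described above; set them badly and the toy agent fixates or stops responding to its "needs" altogether.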
  

       That's the point (doubly so depending on the level of irony in your last annotation) - any form of AI wouldn't exist on an integrated circuit, in the same way that Natural Intelligence doesn't exist in any single neuron. A "reward" is also tricky to define, but in terms of brain chemistry it's normally something that both stimulates "fires" and encourages link formation for those recent fires - creating neural activity, and laying down the pathways for repeating similar firing patterns.
zen_tom, Dec 18 2012
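
The "stimulates fires and encourages link formation for recent fires" mechanism reads like reward-modulated Hebbian learning with an eligibility trace. A rough sketch under that assumption (toy sizes and constants, nothing anatomical):

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(8, 8))   # connection strengths between 8 toy units
trace = np.zeros_like(w)                 # remembers which links were used recently

def tick(inputs, reward, lr=0.05, decay=0.9):
    global w, trace
    activity = np.tanh(w @ inputs)                       # units "fire"
    trace = decay * trace + np.outer(activity, inputs)   # mark recently active links
    w += lr * reward * trace                             # reward lays down those pathways
    return activity

x = rng.random(8)
out = tick(x, reward=0.0)
for _ in range(200):
    r = float(out[0])          # treat unit 0's firing as the global "reward" signal
    out = tick(x, r)
print(round(float(out[0]), 3)) # the rewarded firing pattern becomes entrenched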
  

       You want a robot to be your mother, so you don't have to think/work/worry ever again, unless you want to?   

       Let's be a bit more specific about what you want:
-- guaranteed personal safety, income, food, and healthcare to a typical first-world upper-middle-class level
-- personal freedom to work on what you want, associate with whoever you want, and leave the system whenever you want
-- freedom to choose whether or not to think about complex issues
  

       I think this could be a pleasant society to live in. Actually, I think something like this outcome is inevitable as long as the current trends of increasing robotic technology and nanny state intervention continue.
sninctown, Dec 19 2012
  

       This is a bit off the cuff, but if machine intelligence could be taught basic ethics and allowed to guide world governments through sheer reason rather than force, then the end result would be a combination communist/capitalist state where there is a comfortable standard of living for every being, while allowing room for those who wish to excel to be able to do so... organic and inorganic alike.
No individual human can see all of the possible variables and be able to change policy on-the-fly as those variables change, and design by committee, while sometimes better, cannot wade through the red tape fast enough to keep up.
  

       Bad guys ain't gonna dig it much though.
Pattern recognition by a thinking computer would make most crimes very difficult to pull off cleanly.
  

       [2 fries] Again, read Asimov's Robot series, where that exact scenario is discussed in depth.
spidermother, Dec 20 2012
  

       Then again, the bad guys would make a virus, and the whole thing will go bad.   

       I read all of Asimov's books (I think) including his anthology of Jewish science fiction, which I cannot find a trace of on the web, and no one except Prof. Shalom Rosenberg seems to have heard of it.   

       [Edit] ... OK! Just found it. The web evolved since I last looked.
pashute, Dec 20 2012
  
      