halfbakery


From 1 to AI

Create AI by counting

Machine-level programming is a series of 1s and 0s. Take a computer and write a bit of software that can remember a really long number, add one to that number, and write it to the hard drive of another machine, starting in the boot sector. Additionally, this software should be able to hard-terminate the second machine when it senses that it has stalled, and should be able to boot it up again after writing to its hard drive. This is the control computer.

Now take this second machine, probably a virtual machine, and using the control computer, add one to the number stored in memory, write the new value to the second machine's hard drive, starting in the boot sector, and then boot it up. At first the machine will do nothing useful before the control computer senses that it has frozen and terminates it. But after several cycles, sequences of 1s and 0s will form addresses in the boot sector that reference sequences on the hard drive, some of which will be valid instructions.
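The loop can be sketched in a single process. This is a toy simulation, not the real scheme: the instruction set, the step budget standing in for the stall detector, and the function names are all assumptions made for illustration; the actual proposal writes raw bytes to a second machine's boot sector.

```python
# Toy simulation of the control computer / candidate machine loop.
# Opcode 7 halts cleanly; any other byte is a no-op; running off the
# end of the program, or exhausting the step budget, counts as a stall.

STEP_BUDGET = 1000  # stand-in for the control computer's stall detector
HALT = 7

def run_candidate(program: bytes) -> bool:
    """Return True if the candidate 'boots' and halts cleanly."""
    ip = 0
    for _ in range(STEP_BUDGET):
        if ip >= len(program):
            return False          # ran off the end: crash
        if program[ip] == HALT:
            return True           # clean halt: a viable sequence
        ip += 1                   # every other byte is a no-op
    return False                  # step budget exhausted: stalled

def find_first_viable(start: int = 0) -> int:
    """The control computer: count upward until a candidate survives."""
    n = start
    while True:
        program = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
        if run_candidate(program):
            return n
        n += 1                    # the 'mutation' is just adding one

print(find_first_viable())  # → 7, the first program containing a halt
```

With this toy instruction set the first viable "program" is the number 7 itself; the real experiment's search space is astronomically larger, which is what the annotations turn on.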

After billions of cycles, it may be possible to find one or more viable number sequences that may or may not do anything useful, but do not stall when run. After many, many more cycles, it may be possible to find one or more viable number sequences that are capable of rewriting the hard drive without the system crashing or stalling.

This method is heavily dependent on the nature of the machine (computer, robot, virtual, etc.) and the environment (real life, virtual simulation) in which it operates and also the type of processing components and their machine level designs.

Would it be possible, given enough time, to find a sequence that made it possible for the machine to learn, adapt itself to its environment, and adapt its environment to itself?

Once a viable number was found, the sequence could be written to other machines of precisely the same construction to replicate the results. Different "life" experiences (via variations in any input sensors) may affect the viability of the sequence as different areas are written and rewritten during each trial, but eventually something may be found that can survive any circumstance without hanging.

A brute-force method to AI, but with no need for human understanding or programming within the black box of the system. At least not until we find something that looks like it works, and can then crack its head open and see what's in there.
LimpNotes, Nov 29 2015

Library of Babel https://libraryofbabel.info
You will find the full instructions for creating the perfect AI in a document here. [pocmloc, Nov 29 2015]

Wolfram Alpha Turing Machines https://www.wolfram...TuringMachines.html
Here's an implementation of the idea, just need to employ someone to keep on incrementing the index numbers [zen_tom, Nov 29 2015]

The coding level involved is Machine Code. https://en.wikipedi...g/wiki/Machine_code
[LimpNotes, Nov 30 2015]

"Analogue" digital computing through self-programming http://www.damninte...origin-of-circuits/
Fascinating and slightly scary. [AusCan531, Dec 01 2015]

Adrian Thompson Publication http://citeseerx.is...1&rep=rep1&type=pdf
[LimpNotes, Dec 01 2015]

Chinese Room https://en.wikipedi...g/wiki/Chinese_room
[the porpoise, Dec 01 2015]


       //Would it be possible, given enough time, to find a sequence// Define "enough time". How soon are you going to get bored? After a day? A year? A couple of centuries? The heat death of the universe?
pocmloc, Nov 29 2015

       We are evidence that it won't take THAT long.
LimpNotes, Nov 29 2015

       // this bit of software should be able to hard-terminate the second machine when it senses that the second machine has stalled //   

       There's your problem. This is called the "halting problem", and it's something computer programs provably can't do in the general case.
sninctown, Nov 29 2015

       I assume the first and second machines are full working systems with applications running; otherwise I don't think there is enough 1s-and-0s stuff as working clay to get this life to bootstrap. The initial step from mineral to animal. Not enough complexity, and the loop will stay simple.   

       Computers are just too simple. There also needs to be a positive pressure for the running of the snippets, to indicate to the overall system that they are beneficial. Anything beneficial should be marked for random repetition and adaptation.
wjt, Nov 29 2015

       The Control Computer is a working system whose only real job is to keep track of the number that is being tested and write it to the second computer's hard drive.

The second computer has no programming except what the Control Computer writes to it. No snippets. No operating system. Nothing. Imagine a clean system with a 1GB hard drive and a 512-byte boot sector. The instructions in the boot sector, the information on the drive, and the machine-level infrastructure of the processor, memory, and other components are what create the computing experience we know. So running this experiment would recreate "operating systems" and "applications" in due course. Headline reads: First AI developed, immediately sued for copyright infringement.

With a 1GB hard drive and assuming 90MB/s write speeds, it would take about 32 years of writing time alone to run this experiment (keeping in mind that the entire drive must be rewritten each time). Over the course of this time, the system would recreate drivers and communications protocols for talking to its peripherals, including external drives and the WWW. Keeping track of useful portions is not necessary because the useful patterns will re-emerge several times throughout the experiment. I imagine this as developing in several layers... or maybe concentric rings would be a better model. A stable boot sector, a stable hard drive, stable communications with secondary drives and peripherals, and finally stable communications with the outside world, including human operators and the WWW.
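A quick sanity check on the writing-time arithmetic, assuming a 1GB drive taken as 1024 MB and a sustained 90 MB/s:

```python
# Back-of-envelope check on the rewrite-time figures above.
drive_mb = 1024          # 1 GB drive, taken as 1024 MB
write_speed = 90         # sustained write speed, MB/s

seconds_per_rewrite = drive_mb / write_speed      # ~11.4 s per full rewrite
seconds_in_32_years = 32 * 365.25 * 24 * 3600     # ~1.0e9 s

rewrites = seconds_in_32_years / seconds_per_rewrite
print(round(seconds_per_rewrite, 1), int(rewrites))  # 11.4 88755750
```

So 32 years of writing time covers on the order of 10^8 full rewrites, one per candidate number; how that compares with the size of the search space is taken up in the annotations.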

Measuring intelligence is the hard part, and as [sninctown] mentioned, so is escaping non-intelligent loops and stalls. Intelligence cannot be measured in a vacuum, so there will certainly need to be more thought put into how to do this, but that is what Turing was up to. Certainly there will be effects that can be seen as more intelligent than others, such as writing to peripheral hard drives rather than the main hard drive, or the amount of traffic over the WWW. Putting an artificial time limit on the run cycles and counting the number of "intelligent" behaviors during that time frame could offer a solution, with the more "intelligent" sequences saved for further trials, analysis, and scrutiny.
LimpNotes, Nov 29 2015

       I may have misunderhended the idea, but it seems like an algorithm for in-silico evolution of intelligence.   

       If so, this is a widely studied field. Neural networks, for instance. What I don't see in the idea as posted (but may have missed) is a mutation step. Also, you would probably want a step that shuffles and recombines the best algorithms (ie, sexual reproduction), as well as throwing in mutations.   

       But, as I said, I may have missed something in the post.
MaxwellBuchanan, Nov 30 2015

       Not sure if trolling or serious? The mutation is done by adding one. You have a biology background. Imagine a genetic sequence 8 589 934 592 bases long. Now imagine that instead of 4 nucleotides (not counting the 20 or so artificial ones) there are only two. Call them 1 and 0. Start with a sequence that reads 000000......001, stick it in a cell and see what happens. Death, most likely. Now add one (in binary) and you get 000000......010. Stick that in a cell and see what happens. Again, probably death. Continue the cycle until you've reached 111111.......111. Somewhere in there, there may be a viable sequence. Consider that the human genome is only 3 billion base pairs.

That's analogous to what I am proposing. Try every possible sequence given the space limitations and see what comes out. Now 1GB of data is not much storage space, but peripheral storage allows room for this. All that needs to develop out of the 1GB is the means for autonomously manipulating data to and from the peripherals, including WWW. Once that is done, the peripheral hard drives can be incorporated into the intelligence as a storage device.

The time for the test can be cut in half by running two systems simultaneously, working on different portions of the possible sequence. More systems decrease the time even further.

Maybe a link to Machine Code would help explain things? <link>
LimpNotes, Nov 30 2015

       No, not trolling.   

       If you're adding 1 each time, you're not really evolving, and it will be a hugely inefficient process.   

       If your "genome" is 8 589 934 592 bases (bits) long, then it will take 2 ^ (8 589 934 592) attempts to try all possible numbers. Even at 10^50 attempts per second, it would take longer than the age of the universe (by far) to try them all.   
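The magnitudes here are easiest to check in log10, since 2^(8 589 934 592) won't fit in any float:

```python
import math

bits = 8_589_934_592                   # the 1 GB "genome", in bits
log10_attempts = bits * math.log10(2)  # log10 of 2**bits

age_of_universe_s = 4.35e17            # ~13.8 billion years, in seconds
rate = 1e50                            # the generous attempts-per-second figure
log10_feasible = math.log10(rate * age_of_universe_s)

print(int(log10_attempts))  # the attempt count has ~2.59 billion digits
print(log10_feasible)       # ~67.6: feasible attempts top out near 10**68
```

Even 10^50 attempts per second for the age of the universe gets through only about 10^68 candidates, against a search space of roughly 10^2,600,000,000.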

       An evolutionary algorithm is much, much, much, much more efficient than this, because partially-successful individuals undergo minor mutations to yield the next generation.
MaxwellBuchanan, Nov 30 2015

       Even if you were to stumble upon a brute force-developed AI that worked, there would be no guarantee that you'd want the result to survive, as it would most likely become a brute with no knowledge or use for the three laws of robotics or any such preventative algorithm that cares a lick for fragile life as we know it.
RayfordSteele, Nov 30 2015

       Perfect! Thanks for explaining my error [Maxwell]. The idea was to make a black box system that needed no human interaction to produce a result, save for judging the intelligence that came out of it. It all seemed too easy, like someone would have tried it before, but I understand the error now.
LimpNotes, Nov 30 2015

       //It all seemed too easy, like someone would have tried it before, but I understand the error now.//   

       Well, you're on the right lines, but evolution with incremental improvement is infinitely better than trying all possibilities.   

       There's a classic example in (I think) a Dawkins book. First, write a program that generates random strings of letters until you get "To be or not to be". It will take roughly 27^18 (something like 10^26) tries, counting spaces.   

       Second, write a program which randomly mutates a string of initially random letters, each preserving whichever mutant has the most letters in common with "To be or not to be". You get there in something like 100-200 goes.
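The Dawkins-style "weasel" scheme described above is easy to reproduce. This is a sketch, not Dawkins's original code: the population size, mutation rate, and scoring function are assumptions, following his approach of breeding many mutant copies per generation and keeping the best.

```python
import random

TARGET = "To be or not to be"
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP, RATE = 100, 0.05   # mutants per generation, per-character mutation rate

def score(s):
    """Number of characters matching the target (the selection pressure)."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Randomly change each character with probability RATE."""
    return "".join(random.choice(ALPHABET) if random.random() < RATE else c
                   for c in s)

random.seed(1)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)

for generation in range(1, 20001):
    # Breed POP mutant copies and keep the best, if it's no worse.
    best = max((mutate(parent) for _ in range(POP)), key=score)
    if score(best) >= score(parent):
        parent = best
    if parent == TARGET:
        break

print(generation, repr(parent))
```

With these settings it typically reaches the target in a few hundred generations, against the ~10^26 strings a blind enumeration would have to wade through.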
MaxwellBuchanan, Nov 30 2015

       The paradox being that we are attempting to build systems that conform to a script of what we identify as intelligent, whereas any true intelligence would be capable of violating that script per its own volition. I can't shake this feeling that any work we are doing is merely taking a random walk through all possibilities.
LimpNotes, Nov 30 2015

       There's something very similar to what you're talking about in my [link], where digital chips programming themselves produced a very analogue-looking result. For any really complex device programmed using this method, we have basically zero chance of really understanding how it operates. Fascinating and slightly scary.
AusCan531, Dec 01 2015

       Amazing Link!
LimpNotes, Dec 01 2015

       Funny [Bigs]. I'll link the original research for anyone who is interested.
LimpNotes, Dec 01 2015

       No, I'm not Adrian Thompson. Where does your thinking fall on the curve [Bigs]?
LimpNotes, Dec 01 2015

       //The paradox being that we are attempting to build systems that conform to a script...   

       What is intelligence? Perhaps, within the confines of existing processors and memory, modern AI is already as intelligent as possible. Your idea is bound by the same confines. The Chinese room [link] comes to mind.   

       Intelligence as we know it is inescapably linked to the senses, the flow of chemicals, our microbial flora, and a program that has had 4 billion years of haphazard patches across trillions of installs. We can imagine what true machine intelligence looks like in the same way that we can imagine a new spectrum of colour.   

       Your idea is the kernel of how it has to be done. Machines need to create themselves according to their own needs and decisions. We can perhaps kick it off, but it has to evolve without us meddling.
the porpoise, Dec 01 2015

       I still think Adrian Thompson's chips are amazing. I might even go so far as to say that they are more impressive than the neural networks. There is no foreseeable reason for them to be able to detect two different tones without clocks, and yet they do. To quote: "Nature found a way." To my way of thinking, this is a real evolution of intelligence... in a breaking-from-the-script sort of way... even if extremely limited in its scope. Neural networks are capable of much more complex problems, learning, and fitting our definition of intelligent, but they are just running the algorithms they have been programmed with.
LimpNotes, Dec 01 2015

       + for getting those links. Also bunned a good idea for each of the contributors. (note2selfie, HBR, COM shifting, Pay as you learn, and Taser round hunting rifle - for the Thompson link) Thanks all!
pashute, Jul 02 2017

LimpNotes, Jul 04 2017

