The deed is done.
My friend Nils and I did a “sprint” last night, in which we finished the first version of our seed AI.
First we made a simple IQ function that tests how well a program (a Program Generator, or PG) can generate new programs (leaves) given feedback on how close each leaf is to what we want.
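To make the idea concrete, here is a rough Python sketch of what such an IQ function could look like. The target, the closeness measure, the number of rounds, and the random baseline PG are all illustrative assumptions for the example, not our actual code.

```python
# Illustrative sketch only: score a Program Generator (PG) by how close its
# generated "leaf" programs get to a target output, feeding the closeness
# back to the PG each round. All names and constants here are assumptions.
import random

TARGET = [1, 2, 3, 4]   # output we want a leaf program to produce (example)
ROUNDS = 100            # how many leaves the PG may try
LEAF_LEN = 16           # length of a generated integer program


def closeness(output, target=TARGET):
    """Return a score in [0, 1]: fraction of positions matching the target."""
    matches = sum(1 for a, b in zip(output, target) if a == b)
    return matches / len(target)


def iq(pg, run_program):
    """Score a PG by the best closeness it reaches, given feedback each round."""
    best = 0.0
    feedback = 0.0
    for _ in range(ROUNDS):
        leaf = pg(feedback, LEAF_LEN)    # PG proposes a new leaf program
        output = run_program(leaf)       # execute the leaf in the VM
        feedback = closeness(output)     # tell the PG how close it got
        best = max(best, feedback)
    return best


def random_pg(feedback, length):
    """Baseline 'seed' PG: ignores feedback and emits random integers."""
    return [random.randint(0, 255) for _ in range(length)]
```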
A PG that scores the best IQ so far gets a chance to generate new PGs; in effect it becomes a Program Generator Generator. We call this state a Challenger. When a Challenger generates a new PG with the best IQ so far, it has shown that it not only has a good IQ itself but can also produce other programs with good IQ, and it is therefore promoted to the status of Master (and the smart PG it produced becomes the new Challenger).
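A minimal sketch of that promotion loop might look like the following; the function names, the single-threaded loop, and the step limit are illustrative assumptions, not our real implementation.

```python
# Illustrative sketch of the Challenger/Master promotion rule: a PG that sets
# a new IQ record becomes the Challenger; when a Challenger generates a PG
# that itself sets a record, the Challenger is promoted to Master and the
# record-setting PG becomes the new Challenger.

def evolve(seed_pg, iq, generate_pg, steps=1000):
    """Run the Master/Challenger loop for a fixed number of steps."""
    master = None
    challenger = seed_pg
    best_iq = iq(seed_pg)
    for _ in range(steps):
        candidate = generate_pg(challenger)  # Challenger acts as a PG generator
        score = iq(candidate)
        if score > best_iq:
            best_iq = score
            master = challenger              # Challenger proved itself: promote
            challenger = candidate           # record-setting PG takes its place
    return master, challenger, best_iq
```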
The programs are generated and run in a circular buffer inside a virtual machine in which every sequence of integers is a valid program and no operator can throw an exception. Such a VM is much slower than machine code, but the overall process runs faster than it would on bare-metal x86, because on x86 (or any other real architecture) most byte sequences are meaningless and trigger exceptions, which are slow to handle. A PG good enough (a human, for example) to write code without generating (many) exceptions would in theory run faster on x86, but our current primitive PGs benefit from the virtual environment.
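For illustration, a no-exceptions VM of this kind could be sketched like this; the opcode set, register layout, and step limit below are made up for the example and are not our actual instruction set.

```python
# Illustrative sketch of a "no-exceptions" VM: the program is a list of
# integers treated as a circular buffer, every integer maps to some opcode
# via modulo, and operators such as division are defined so they never throw.

def run_program(code, steps=256, memory_size=8):
    """Execute an integer program; any sequence of integers is valid."""
    if not code:
        return []
    mem = [0] * memory_size
    acc = 0
    pc = 0
    out = []
    for _ in range(steps):
        op = code[pc % len(code)] % 6        # circular buffer; every int is an opcode
        arg = code[(pc + 1) % len(code)]
        if op == 0:                          # load immediate
            acc = arg
        elif op == 1:                        # add memory cell
            acc += mem[arg % memory_size]
        elif op == 2:                        # store accumulator
            mem[arg % memory_size] = acc
        elif op == 3:                        # "safe" divide: never raises
            acc = acc // arg if arg != 0 else 0
        elif op == 4:                        # output accumulator
            out.append(acc)
        else:                                # jump (wraps around the buffer)
            pc = arg
            continue
        pc += 2
    return out
```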
Anyway, we wrote the code, pressed Enter (the title is a reference to the nice movie Pi), and voilà, our randomly generating seed started finding programs more intelligent than itself: Challengers. After a while (Nils puts it at about 15 seconds) a Challenger managed to become Master, and after a longer while the Master produced a Challenger that later became the third-generation Master. Spectacular!
Now we just need a better IQ test and a way to inspect the generated programs! Well, we also need tons and tons of hardware. This is the sort of task that could happily use up Google’s entire computer armada for a year and still benefit from more. Hmm.. perhaps if I ask them nicely..
You're right, it sounds very similar on a high level, & I am sure there are many people who'd agree with the definition. But I don't know of anyone who used it to derive a universal, low-level, quantitative criterion to select inputs & algorithms. The key is to start from the beginning: raw sensory inputs, & "test" their predictive value, in the process discovering more & more complex patterns. That's what scalability is all about: if you can't evaluate pixels, it'll be super-exponentially more difficult to start from more complex data. That's why I think Cyc, NLP, & high-level approaches in general are hopeless for AGI.
I am sorry, but your "Intelligence test" idea, besides being entirely hypothetical & presumably externally administered, has it exactly backwards. Just like many Algorithmic Learning approaches, you want to generate patterns & algorithms instead of discovering them in the real world. Quite simply, we predict from experience; these patterns & algorithms will have *no* predictive value beyond mere chance unless they're derived from the experience. Notice that the difference between patterns & algorithms is strictly in their origin: the former are discovered & the latter are "invented".