A recent commenter (yes, people do comment here, on occasion) said that building AI from the bottom up is the only way to go. I spent some time thinking about that, and I have to agree. Then I started expanding my thinking to include all artificial complex adaptive systems (ACAS). If I wanted to build an ACAS, I would HAVE to start from the bottom, then test from the top down.
This sounds like an axiomatic system. Create a set of axioms (building blocks) and see if you can reach a set of higher-level principles from those axioms. If not, you might have bad axioms, bad operators to apply to those axioms, or just a bad goal in general. Even reaching the principles might not prove anything, for the same reasons.
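To make the test loop concrete, here is a toy sketch in Python (my own construction, with rewrite rules loosely borrowed from Hofstadter's MIU puzzle; the function name and the step cap are just illustrative). It starts from the axioms, applies the operators breadth-first, and reports whether a target principle ever falls out.

from collections import deque

def derivable(axioms, operators, goal, max_steps=10000):
    # Breadth-first search: can `goal` be reached from `axioms`
    # by repeatedly applying `operators`?
    seen = set(axioms)
    frontier = deque(axioms)
    steps = 0
    while frontier and steps < max_steps:
        current = frontier.popleft()
        if current == goal:
            return True
        for op in operators:
            for derived in op(current):
                if derived not in seen:
                    seen.add(derived)
                    frontier.append(derived)
        steps += 1
    # Not reached within the step budget: bad axioms, bad operators,
    # or a bad goal -- the search alone cannot tell you which.
    return False

# Two rewrite rules, loosely after Hofstadter's MIU system:
ops = [
    lambda s: [s + "U"] if s.endswith("I") else [],             # xI -> xIU
    lambda s: ["M" + s[1:] * 2] if s.startswith("M") else [],   # Mx -> Mxx
]
print(derivable(["MI"], ops, "MIUIU"))  # True: MI -> MIU -> MIUIU

Note that a False result is exactly as ambiguous as described above: the search cannot distinguish bad axioms from bad operators from an unreachable goal.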
The downside (the only one I can see so far) is Gödel's Incompleteness Theorem. The mind is self-referential. Gödel shows us that axiomatic systems (at least those rich enough to encode basic arithmetic) run into problems when they become self-referential.
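For reference, the standard textbook statement of the first theorem (nothing here is specific to minds or ACAS; $F$ and $G_F$ are the conventional symbols): for any consistent, effectively axiomatized system $F$ that can encode basic arithmetic, there is a sentence $G_F$ such that

\[
F \nvdash G_F \quad \text{and} \quad F \nvdash \lnot G_F,
\]

where $G_F$ in effect says "this sentence is not provable in $F$". The self-reference is the engine of the proof, which is why a self-referential mind-as-axiom-system seems to walk straight into it.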
Overall, this sounds like a promising avenue to explore. Can any complex adaptive system be translated into an axiomatic system? If so, how does Gödel come into play? Is it even relevant?