Wednesday, April 7, 2010

AI Dichotomies

While reading an interesting blog entry (http://monicasmind.com/?p=53), I came across a list of dichotomies that AI researchers should settle for themselves before really working on AI.  As I don't fully understand all of those listed, I want to take the time to figure out exactly where I stand on each of them.

"The most important dichotomies are the Reductionist / Holist split, the Symbolic / Subsymbolic split, the Essentialist / Nominalist split, the Instructionist / Selectionist split, The Infallible / Fallible split, and the Logic / Intuition split (which could also be called the Reasoning / Understanding split)."

Reductionist / Holist
I feel I may say this often, but I see the need for both.  The human mind/brain is a complex system.  While looking at each individual part gives us more information, we will not truly understand the mind until we have a concrete holistic view.  I think this also applies to AI: we can learn much from neural networks, genetic algorithms, computer vision, knowledge representation, and so on, but we will not truly understand AI until we have a holistic understanding.  Each part on its own is missing information that every other part provides.

Symbolic / Subsymbolic
This is a much more difficult one, and one I am still looking into.  Symbolic AI follows the physical symbol system hypothesis (http://en.wikipedia.org/wiki/Physical_symbol_system).  Its greatest criticism comes from the symbol grounding problem: how the symbols get their meaning.  Subsymbolic AI came later, after symbolic AI stalled; it includes approaches such as embodied AI and neural nets, though I haven't seen a really good definition of it.
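
To make the split concrete for myself, here is a toy sketch (my own simplification, with made-up predicate and feature names, not anything from the literature): the symbolic version represents a concept as an explicit rule over named symbols, while the subsymbolic version spreads the "concept" across numeric weights, with no single place the symbol lives.

    # Toy contrast between a symbolic and a subsymbolic representation of a concept.
    # Purely illustrative; real systems on either side are far richer.

    # Symbolic: the concept is an explicit, human-readable rule over named predicates.
    def is_tiger_symbolic(facts: set) -> bool:
        # 'facts' might be {"has_stripes", "is_feline", "is_large"}
        return {"has_stripes", "is_feline", "is_large"} <= facts

    # Subsymbolic: the "concept" is just a weight vector over raw features;
    # no individual weight means "tiger" on its own.
    def is_tiger_subsymbolic(features, weights, bias=-1.0) -> bool:
        activation = sum(f * w for f, w in zip(features, weights)) + bias
        return activation > 0.0

    print(is_tiger_symbolic({"has_stripes", "is_feline", "is_large"}))  # True
    print(is_tiger_subsymbolic([0.9, 0.8, 0.7], [1.2, 0.5, 0.9]))       # True (activation ~1.11)

The symbol grounding criticism shows up even in this toy: nothing in the first version says what "has_stripes" actually means.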

I honestly don't know if I agree that symbols are both necessary and sufficient for AI.  Granted, a lot of successful work has been done with symbols, but it has also been fairly limited in scope.  On the other hand, no approach of any kind has yet produced a fully successful AI.  I want to study this one more, and maybe take a step back to see if there is a prior assumption that has forced us down this path.

Essentialist / Nominalist
Are there essential properties of a specific object that place it within a category?  (http://en.wikipedia.org/wiki/Essentialism)  For instance, are there certain properties required to make a specific tiger officially part of the 'tiger' classification?  For our purposes, are there certain properties of a mind that make it part of the 'mind' classification?  Taking my own mind as an example, what essential properties MUST be incorporated into an artificial mind for it to count as a 'mind'?  This has been a bit of a moving target for AI; as soon as one property is replicated, another property suddenly becomes 'essential'. 

On the other hand, a nominalist would say that universals (categories) do not really exist. (http://en.wikipedia.org/wiki/Nominalism)  Given all cats, are there any essential properties that distinguish them from a different but similar category?  In other words, what makes a cat not a dog?  Many properties belong to both, yet we humans can typically tell the difference between the two.  Some nominalists would say that a mind is required for this distinction, or at least that the distinction is created solely by something like a mind. 
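
In classifier terms (a sketch of my own, with invented property names), the essentialist view looks like checking for a fixed set of required properties, while a nominalist-flavoured alternative just measures similarity to remembered examples and lets the category line fall out of that.

    # Essentialist flavour: membership requires a fixed set of "essential" properties.
    ESSENTIAL_CAT_PROPERTIES = {"retractable_claws", "feline_skull", "obligate_carnivore"}

    def is_cat_essentialist(properties: set) -> bool:
        return ESSENTIAL_CAT_PROPERTIES <= properties

    # Nominalist flavour: no essence, only similarity to labelled exemplars
    # (here measured with simple Jaccard overlap).
    def classify_nominalist(properties: set, exemplars: dict) -> str:
        def similarity(a, b):
            return len(a & b) / len(a | b)
        return max(exemplars,
                   key=lambda label: max(similarity(properties, e) for e in exemplars[label]))

    exemplars = {
        "cat": [{"retractable_claws", "whiskers", "meows"}],
        "dog": [{"whiskers", "barks", "wags_tail"}],
    }
    print(classify_nominalist({"whiskers", "meows", "retractable_claws"}, exemplars))  # cat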

I don't know where I stand on this one either.  It requires more research and thought.

Instructionist / Selectionist
I cannot find any reference to these terms in Wikipedia, and a Google search gives me links to sites from multiple fields.  I'll need to look into this one a bit more.

Infallible / Fallible
Given how common these words are, I'll just use their standard definitions.  An infallible AI would be one that has to have 100% correct information and make 100% correct decisions; I would lump any (non-fuzzy) logical approach into this group.  A fallible AI would have a degree of fuzziness to it, giving a 'likelihood' measurement for its decisions.
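
As a rough sketch of the difference (my own toy example, nothing more): an infallible-style decision procedure demands complete, certain inputs and returns a hard true/false, while a fallible one combines uncertain evidence into a likelihood and accepts that it may simply be wrong.

    # Infallible style: refuse to answer unless the information is complete.
    def decide_infallible(evidence: dict) -> bool:
        if None in evidence.values():
            raise ValueError("cannot decide: incomplete information")
        return all(evidence.values())

    # Fallible style: combine uncertain evidence into a likelihood and accept the risk of error.
    def decide_fallible(evidence_probs, threshold=0.5):
        likelihood = 1.0
        for p in evidence_probs:      # naive independence assumption
            likelihood *= p
        return likelihood > threshold, likelihood

    print(decide_fallible([0.9, 0.8, 0.95]))  # (True, 0.684)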

We currently have only a single example of human-level intellect, and it's not dolphins.  One thing I can say with confidence is that humans are fallible.  In giving something artificial intelligence, why would we force ourselves to make it infallible?  Doing so would create difficulties that might not even be solvable, and would take far too much computational power.  I stand firmly in the fallible camp.

Logic / Intuition (or Reasoning / Understanding)
This one is a bit easier to define and discuss.  It seems to me that logic/reasoning is a learned skill.  A human, even an intelligent one, can survive without being very logical.  After all, politicians manage it.  Jokes aside, there are plenty of human beings who couldn't follow the simplest logical argument.  Yet every human being, barring mental impairment, is able to act intuitively, based on their experiences with the world.  Humans raised by animals (there are documented accounts) still have instincts and intuition; they can understand concepts even if they cannot articulate them. 

However, unlike logic, intuition/understanding is a very difficult concept to program into a computer.  I think the early AI researchers were looking for an idealized version of intelligence: since the more 'intelligent' humans tend to use logic and act logically, logic must be the be-all and end-all of intelligence.  A good website on this is (http://artificial-intuition.com/index.html).  It claims that intuition is easy to program, but gives few hints as to how. 
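
One naive reading of "intuition" as experience matching (my own sketch, certainly not their method) would be: store past situations and outcomes, match new input against the nearest ones, and jump to a conclusion with no deductive chain, accepting that the guess may be wrong.

    # Naive "intuition" as experience matching: remember (situation, outcome) pairs,
    # then guess the outcome of a new situation from its nearest stored experiences.
    # This is just nearest-neighbour matching, not anyone's actual intuition engine.
    import math
    from collections import Counter

    class NaiveIntuition:
        def __init__(self, k=3):
            self.k = k
            self.experiences = []            # list of (feature_vector, outcome) pairs

        def learn(self, situation, outcome):
            self.experiences.append((situation, outcome))

        def guess(self, situation):
            def distance(a, b):
                return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
            nearest = sorted(self.experiences,
                             key=lambda e: distance(e[0], situation))[:self.k]
            # Jump to a conclusion: majority vote, no reasoning chain, fallible by design.
            return Counter(outcome for _, outcome in nearest).most_common(1)[0][0]

    gut = NaiveIntuition(k=1)
    gut.learn((1.0, 0.0), "safe")
    gut.learn((0.0, 1.0), "danger")
    print(gut.guess((0.9, 0.1)))  # "safe"

Whether something that simple deserves the name "intuition" is exactly the kind of question I still need to think about.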

Conclusion
And there you have it.  The above shows that I know some of what I believe when it comes to AI, but I still need to do some more research before committing to more of the details.

1 comment:

  1. Some clarifications based on your reading of my post:

    If we are building an AI bottom-up (and that's the only way to go) then we must be Holistic at the bottom. Reductionism must *emerge* as a topmost layer in a system that's otherwise Holistic all the way down to sensory inputs.

    Going with Nominalism is much easier once you let go of Symbolicism and Reductionism.

    For the Instructionist/Selectionist (or, "Constructionist") viewpoints, see Jean Piaget.

    Intuition *is* easy to program into computers. Naively, if we want to make a program that collects experience ("Learns") and then matches sensory input to those experiences and is *expected to jump to conclusions* in a fallible way, how hard can that be?

    There are deeper issues, such as the Key To AI: Saliency. How to learn only that which will make a difference later, ignoring the rest. How to determine which dimensions are relevant in a 1000-dimensional space that makes up our sensory experience. We (at syntience.com) know how to do this but we're battling the devils of details.

    For much more (later) results on these issues see my talk series at http://videos.syntience.com .
