Friday, December 22, 2006

Computers cannot handle every hue of human behavior

The dream of a mechanical mind is centuries old, and since the industrial age humans have steadfastly held that the brain, the seat of thought -- which in the 19th century was described as a property on a par with electricity, the faculty of motion and impenetrability -- could be duplicated. The dream of mechanized thought, however, began to resemble a vision only in the first half of the 19th century, when Charles Babbage, with suitable encouragement and inspiration from Lady Lovelace, designed the Analytical Engine, which could, at the heave of a designated crank, tabulate any function and in principle even perform that most emblematic of all intelligent activities - play chess.

Then came the World Wars. The immense logistics involved hastened the development of number-crunching computers. Meanwhile, in another arena of the war theatre, scientists examining the plethora of brain-damaged patients thrown up by the fighting concluded that the operations of a nerve cell and its connections with other nerve cells could be described in terms of logic. From this it was deduced that intelligence could be expressed in purely mathematical or computational terms. Thus was the cast assembled.

The creators of artificial intelligence did not seek merely to design efficient goal-oriented machines, like, say, the pocket calculator; their ambition was to replicate the human thought process itself and thus provide scientific answers to the philosophical questions concerning the nature of knowledge that had gripped mankind down the ages. They believed they had hit upon the pivotal idea that what distinguished human intelligence from its animal counterpart was its innate ability to manipulate symbol systems such as those found in mathematics and logic. The machine's other human propensities included its apparent mimicry of the human thought process during a chess game, as it processed information over time in a logical manner by decomposing the task into sub-tasks.
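The serial, symbolic style just described can be sketched in a few lines of code. The sketch below (Python, purely illustrative; nothing in it comes from the article) uses minimax search on the toy game of Nim as a stand-in for chess: each position is decomposed into one sub-task per legal move, exactly the divide-and-conquer manner of processing the paragraph describes.

```python
# An illustrative sketch of serial symbolic processing: a game is solved
# by decomposing each position into its successor positions, one sub-task
# per legal move. Nim (take 1-3 objects from a pile; whoever takes the
# last object wins) stands in for chess here.

def best_move(pile, maximizing=True):
    """Return (value, move): value is +1 if the player to move can force a win."""
    if pile == 0:
        # The previous player took the last object, so the mover has lost.
        return (-1 if maximizing else 1), None
    results = []
    for m in (1, 2, 3):
        if m <= pile:                    # each legal move spawns a smaller sub-task
            value, _ = best_move(pile - m, not maximizing)
            results.append((value, m))
    return max(results) if maximizing else min(results)

value, move = best_move(10)
print(value, move)   # from a pile of 10, taking 2 forces a win
```

Piles that are multiples of 4 are lost for the player to move, which the exhaustive sub-task decomposition rediscovers without being told.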

The cardinal sin of such a serial device was its obeisance to the Greek dictum of man as a rational being, and the concomitant fallacy that the mass of men lead a mental life as logical and rule-governed as a chess game. And lest the reader deem himself an acolyte of the perished Greek tradition, a sucker-punch: four cards, each with a number on one side and a letter on the other, are laid out on the table, their faces displaying, respectively, E, K, 4 and 7. The given rule is: "If a card has a vowel on one side, it has an even number on the other." One is permitted to turn over two cards to determine the validity of the rule.

Now you perhaps realize that the card marked K is superfluous to the rule and that it is imperative to pick the card marked E. The trick lies with the numbered cards, and if your mind is attuned to that of the vast majority of your brethren, you are likely to dump logic for intuition, rush to read between, above and below the lines, conclude that the rule implies that even-numbered cards have a vowel on the other side, and thus fatally plump for 4. The right choice, however, is 7, because while a consonant on the other side of 4 would be irrelevant to the rule, a vowel behind 7 would falsify it.
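The logic of this puzzle -- the Wason selection task -- can be checked mechanically. The short sketch below (Python, added here for illustration only) asks, for each visible face, whether any possible hidden side could falsify the rule; only such cards are worth turning.

```python
# Brute-force check of the card puzzle above. Each card has a letter on
# one side and a number on the other. The rule "if a card has a vowel on
# one side, it has an even number on the other" fails only when a vowel
# is paired with an odd number.

VOWELS = set("AEIOU")
LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
NUMBERS = list(range(10))

def violates(letter, number):
    return letter in VOWELS and number % 2 == 1

def worth_turning(visible):
    """True if some hidden side of this card would falsify the rule."""
    if isinstance(visible, str):          # a letter shows; the hidden side is a number
        return any(violates(visible, n) for n in NUMBERS)
    else:                                 # a number shows; the hidden side is a letter
        return any(violates(l, visible) for l in LETTERS)

for face in ["E", "K", 4, 7]:
    print(face, worth_turning(face))
# Only E and 7 can falsify the rule, so those are the cards to turn.
```

The machine, untroubled by intuition, never reaches for the 4.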

Surely this is not a conclusive example of human irrationality, but computers imprisoned by an absolute faith in the contrary were indubitably destined to fall short, especially with rigid programming condemning the machines to endless repetition of processes minus any learning or improvisation.

Quite obviously, toy problems like chess did grave injustice to any simulacrum of the human mind, and the need was to tackle real-world issues like perception. Soon came the imaginatively termed 'society of mind' theory, wherein the brain was reckoned to be not a general, unambiguous processor of all information but a repository of myriad agents that could selectively handle different kinds of information simultaneously; serial symbolic machines were thus abandoned in favor of parallel processing machines.

Since computation was achieved by excitatory and inhibitory interactions among a network of competing, cooperating neuron-like units, which depended on the statistical properties of the entire ensemble to meet the target, information ceased to inhere in a specific locus, and the need for symbolic processing was obviated in favor of direct perception models. There was thus no need for a separate knowledge store: knowledge simply resided in the strength and appropriateness of the connections between simple units dedicated to specific functions.
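The idea that knowledge lives in connection strengths rather than in a symbol store can be made concrete with the oldest of toy networks. The sketch below (Python, an illustrative miniature, not anything from the article) trains a single neuron-like unit on logical AND using the classic perceptron update rule: no rule is ever programmed in, yet after repeated exposure the weights come to embody it.

```python
# A minimal connectionist sketch: "knowledge" resides only in the
# connection weights of a simple neuron-like unit, which learns logical
# AND from examples via the perceptron update rule.

def step(x):
    return 1 if x > 0 else 0

weights = [0.0, 0.0]   # two input connections, initially ignorant
bias = 0.0
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                       # repeated exposure, not programming
    for (a, b), target in examples:
        out = step(weights[0] * a + weights[1] * b + bias)
        error = target - out
        weights[0] += 0.1 * error * a     # strengthen or weaken each connection
        weights[1] += 0.1 * error * b
        bias += 0.1 * error

for (a, b), target in examples:
    print(a, b, step(weights[0] * a + weights[1] * b + bias))
# The unit now reproduces AND: outputs 0, 0, 0, 1.
```

Nothing in the final state is a stored rule; erase the weights and the "knowledge" is gone, exactly as the direct-perception models held.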

But will a computer ever learn to discern animals in the shapes of clouds, faces in stone, or drama in inkblots? Or find its way through a forest? A great deal of human knowledge is biologically learnt and culturally transmitted, something a computer cannot mimic.

It would be imprudent to expect artificial intelligence to handle every hue of human behavior, and those who sought a facsimile of the human mind in artificial intelligence must resign themselves to the fact that human beings may be an amalgam of several kinds of computers, and may indeed be at odds with any computer hitherto designed. There are certain ways in which our thought processes have an affinity with the computer framework, but there are also basic incompatibilities. Paradoxically, the human mind bears little resemblance to a computer.
