By Eric Vandebroeck and co-workers

Calling it a cargo cult, Wired was one of the first high-profile publications to point out specific misconceptions surrounding AI, including the idea that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book Superintelligence: as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of a small animal; at the other end is the high intelligence of, say, a genius, almost as if intelligence were a sound level in decibels. It is then straightforward to imagine the extension, so that the loudness of intelligence continues to grow, eventually exceeding our own high intelligence and becoming a super-loud intelligence, a roar! Way beyond us, and maybe even off the chart.

To which Forbes added: we have fallen into this trap before. We have tried to turn machines into automatons that mimic human behavior, and those attempts failed. Machines can do incredible things, but we need to design them in a way that fits their strengths. Today we struggle with building autonomous cars, intelligent analysis tools, and complex computer vision systems, but we are not the first to try to automate our world. Rewind to the early 1800s, and the challenge of the day was how to build a sewing machine. The world population was increasing, and walking naked was not an option. There were simply not enough free hands to make the clothes to meet demand; we had to come up with a better way of making garments.

UNESCO put out an extensive analysis of AI in which it aptly stated that the success of the term AI is sometimes based on a misunderstanding.

More recently, MDN explained that AI does not mean data will be fixed automatically, that more data will produce better outcomes, or that AI is ready to deliver out-of-the-box solutions.

Of course, the cargo cult idea is not entirely true; AI is possible. But the future of AI is a scientific unknown. The myth of artificial intelligence is that its arrival is inevitable and only a matter of time, that we have already embarked on the path that will lead to human-level AI and then superintelligence. We have not. The path exists only in our imaginations. Yet the inevitability of AI is so ingrained in popular discussion, promoted by media pundits, thought leaders like Elon Musk, and even many AI scientists (though certainly not all), that arguing against it is often taken as a form of Luddism, or at the very least a shortsighted view of the future of technology and a dangerous failure to prepare for a world of intelligent machines.

The science of AI has uncovered a huge mystery at the heart of intelligence, which no one currently knows how to solve. Proponents of AI have huge incentives to minimize its known limitations. After all, AI is big business, and it is increasingly dominant in culture. Yet the possibilities for future AI systems are limited by what we currently know about the nature of intelligence, whether we like it or not. And here we should say it directly: all evidence suggests that human and machine intelligence are radically different. The myth of AI insists that the differences are only temporary and that more powerful systems will eventually erase them. Futurists like Ray Kurzweil and the philosopher Nick Bostrom, prominent purveyors of the myth, talk not only as if human-level AI were inevitable but also as if, soon after its arrival, superintelligent machines would leave us far behind. The scientific part of the myth assumes that we need only keep "chipping away" at the challenge of general intelligence by making progress on narrow feats, like playing games or recognizing images. This is a profound mistake: success on narrow applications gets us not one step closer to general intelligence. The inferences that systems require for general intelligence, to read a newspaper, hold a basic conversation, or become a helpmeet like Rosie the Robot in The Jetsons, cannot be programmed, learned, or engineered with our current knowledge of AI. As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress; we are picking low-hanging fruit. The jump to general "common sense" is completely different. There is no known path from one to the other. No algorithm exists for general intelligence. And we have good reason to be skeptical that such an algorithm will emerge through further efforts on deep learning systems or any other approach popular today. Much more likely, it will require a major scientific breakthrough, and no one currently has the slightest idea what such a breakthrough would even look like, let alone the details of getting to it.

Mythology about AI is bad because it covers up a scientific mystery in endless talk of ongoing progress. The myth props up belief in inevitable success, but genuine respect for science should bring us back to the drawing board. Pursuing the myth is not a way of following the smart money, nor is it even a neutral stance. It is bad for science, and it is bad for us. Why? One reason is that we are unlikely to get innovation if we choose to ignore a core mystery rather than face up to it. A healthy culture of innovation emphasizes exploring unknowns, not hyping extensions of existing methods, especially when those methods are inadequate to take us much further. Mythology about inevitable success in AI tends to extinguish the very culture of invention necessary for real progress, with or without human-level AI. The myth also encourages resignation to the creep of a machine-land, where genuine invention is sidelined in favor of futuristic talk advocating current approaches, often from entrenched interests.

While we cannot prove that AI overlords will not one day appear, we can give reasons to seriously discount that scenario. For example, AI culture has simplified its ideas about people while expanding its ideas about technology. This began with the field's founder, Alan Turing, and involved understandable but unfortunate simplifications one could call intelligence errors. These initial errors were magnified into an ideology by Turing's friend, the statistician I. J. Good, who introduced the idea of "ultraintelligence" as the predictable result once human-level AI had been achieved.

Between Turing and Good, we see the modern myth of AI take shape. Its development has landed us in an era of what I call technological kitsch: cheap imitations of deeper ideas that cut off intelligent engagement and weaken our culture. Kitsch tells us how to think and how to feel. The purveyors of kitsch benefit, while the consumers of kitsch experience a loss. They, we, end up in a shallow world.

The only type of inference, thinking, in other words, that will work for human-level AI (or anything even close to it) is the one we do not know how to program or engineer. The inference problem goes to the heart of the AI debate because it deals directly with intelligence, in people or machines. Our knowledge of the various types of inference dates back to Aristotle and other ancient Greeks and has been developed in logic and mathematics. Inference is already described using formal, symbolic systems like computer programs, so a very clear view of the project of engineering intelligence can be gained by exploring it. There are three types. Classic AI explored one (deduction), modern AI explores another (induction). The third type (abduction) makes for general intelligence, and, surprise, no one is working on it at all. Finally, since each type of inference is distinct, meaning one type cannot be reduced to another, we know that AI systems built without the type of inference undergirding general intelligence will make no progress toward artificial general intelligence, or AGI.
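
To make the three types concrete, here is a minimal, illustrative Python sketch (our own toy example, not drawn from the source); the rules, data, and function names are invented purely for illustration.

    # Toy illustration of three types of inference (hypothetical example).

    # Deduction: apply a known rule to a known fact; the conclusion is certain.
    def deduce(rule, fact):
        return rule(fact)

    # Induction: generalize a rule from observed examples; fallible but useful.
    def induce(examples):
        # "Learn" a threshold from the positive examples (a crude generalization).
        positives = [x for x, label in examples if label]
        threshold = sum(positives) / max(1, len(positives))
        return lambda x: x >= threshold

    # Abduction: guess the best available explanation for an observation.
    def abduce(observation, explanations):
        candidates = [h for h, predicted in explanations.items()
                      if predicted == observation]
        return candidates[0] if candidates else None

    # Deduction: "all ravens are black"; this bird is a raven, so it is black.
    print(deduce(lambda bird: "black", "raven"))

    # Induction: from labeled numbers, infer a general (and possibly wrong) rule.
    rule = induce([(1, False), (5, True), (7, True)])
    print(rule(6))

    # Abduction: the lawn is wet; rain is the best explanation we have on hand.
    print(abduce("wet lawn", {"it rained": "wet lawn", "it was sunny": "dry lawn"}))

Note that the toy abduce only works because the candidate explanations are handed to it in advance; open-ended abduction, generating and ranking explanations that were never enumerated, is exactly what, on this account, no current system knows how to do.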

The myth, one could argue, has terrible consequences if taken seriously, because it subverts science. In particular, it erodes a culture of human intelligence and invention, which is necessary for the very breakthroughs we will need to understand our own future. Data science (the application of AI to "big data") is at best a prosthetic for human ingenuity, one that, if used correctly, can help us deal with our modern data deluge. If used as a replacement for individual intelligence, it tends to chew up investment without delivering results. We explain, in particular, how the myth has negatively affected research in neuroscience, among other recent scientific pursuits. The price we are paying for the myth is too high. Since we have no good scientific reason to believe the myth is true, and every reason to reject it for our own future flourishing, we need to rethink the discussion about AI radically.

Turing had made his reputation as a mathematician long before he began writing about AI. In 1936, he published a short mathematical paper on the precise meaning of "computer," which at the time referred to a person working through a sequence of steps to get a definite result (like performing a calculation). In this paper, he replaced the human computer with the idea of a machine doing the same work. The paper ventured into difficult mathematics, but in its treatment of machines it did not refer to human thinking or the mind. Machines can run automatically, Turing said, and the problems they solve do not require any "external" help or intelligence. This external intelligence, the human factor, is what mathematicians sometimes call "intuition."
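
To make "a machine doing the same work" concrete, here is a minimal Turing-machine-style simulator in Python; the transition table and the example tape are invented for illustration (the machine simply inverts every bit and halts) and are not taken from Turing's paper.

    # A minimal Turing-machine-style simulator (illustrative sketch only).
    def run(tape, rules, state="start", head=0, blank=" "):
        tape = list(tape)
        while True:
            symbol = tape[head] if head < len(tape) else blank
            if (state, symbol) not in rules:      # no applicable rule: halt
                return "".join(tape).rstrip(blank)
            write, move, state = rules[(state, symbol)]
            if head == len(tape):
                tape.append(blank)
            tape[head] = write
            head += 1 if move == "R" else -1

    # Transition table: (state, read symbol) -> (write symbol, move, next state).
    # This machine inverts each bit on the tape, then halts on a blank.
    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
    }

    print(run("10110", rules))   # prints "01001"

Everything the machine does is fixed in advance by the table; nothing in it corresponds to the "external" intuition Turing set aside.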

Turing's 1936 work on computing machines helped launch computer science as a discipline and was an important contribution to mathematical logic. Still, Turing apparently thought that his early definition missed something essential. In fact, the same idea of the mind or human faculties assisting problem-solving appeared two years later in his Ph.D. thesis, a clever but ultimately unsuccessful attempt to bypass a result of the Austrian-born mathematical logician Kurt Gödel.

Though his language is framed for specialists, Turing is pointing out the obvious: mathematicians typically select problems, or "see" an interesting problem to work on, using some capacity that at least seems indivisible into steps, and therefore not obviously amenable to computer programming.

Gödel, too, was thinking about mechanical intelligence. Like Turing, he was obsessed with the distinction between ingenuity (mechanics) and intuition (mind). His distinction was essentially the same as Turing's, in a different language: proof versus truth (or "proof theory" versus "model theory" in mathematics lingo). Are the concepts of proof and truth, Gödel wondered, in the end the same? If so, mathematics and even science itself might be understood purely mechanically. Human thinking, in this view, would be mechanical, too. The concept of AI, though the term remained to be coined, hovered above the question. Is the mind's intuition, its ability to grasp truth and meaning, reducible to a machine, to computation?

This was Gödel's question. In answering it, he ran into a snag that would soon make him world-famous. In 1931, Gödel published two theorems of mathematical logic known as his incompleteness theorems. In them, he demonstrated the inherent limitations of all formal mathematical systems. It was a brilliant stroke. Gödel showed unmistakably that mathematics, all of mathematics, given certain straightforward assumptions, is, strictly speaking, not mechanical or formalizable. More specifically, Gödel proved that there must exist some statements in any such formal (mathematical or computational) system that are True, with a capital T, yet not provable in the system itself using any of its rules. The True statement can be recognized by a human mind but is (probably) not provable by the system it is formulated in.

How did Gödel reach this conclusion? The details are complicated and technical, but Gödel's basic idea is to treat a mathematical system complicated enough to do addition, and likewise every more complicated system, as a system of meaning, almost like a natural language such as English or German. By treating it this way, we enable the system to talk about itself. It can say about itself, for instance, that it has certain limitations. This was Gödel's insight.

Formal systems like those in mathematics allow for the precise expression of truth and falsehood. Typically, we establish a truth by using the tools of proof: we use rules to prove something, so we know it is definitely true. But are there true statements that cannot be proven? Can the mind know things the system cannot? In the simple case of arithmetic, we express truths by writing equations like "2 + 2 = 4." Ordinary equations are true statements in the system of arithmetic, and they are provable using the rules of arithmetic. Here, provable equals true. Mathematicians before Gödel thought all of mathematics had this property. This implied that machines could crank out all truths in the various mathematical systems simply by applying the rules correctly. It is a beautiful idea. It is just not true.
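
To see what "provable using the rules" means concretely, here is a worked sketch (ours, not the author's) of how a Peano-style formal system derives a simple arithmetic truth by symbol manipulation alone, writing 2 as $s(s(0))$, 4 as $s(s(s(s(0))))$, and using only the axioms $x + 0 = x$ and $x + s(y) = s(x + y)$:

$$
\begin{aligned}
2 + 2 &= s(s(0)) + s(s(0)) \\
      &= s\bigl(s(s(0)) + s(0)\bigr) && \text{by } x + s(y) = s(x + y) \\
      &= s\bigl(s(s(s(0)) + 0)\bigr) && \text{by } x + s(y) = s(x + y) \\
      &= s\bigl(s(s(s(0)))\bigr)     && \text{by } x + 0 = x \\
      &= 4.
\end{aligned}
$$

Each line is a mechanical application of a rule, which is exactly the sense in which, for ordinary arithmetic, provable equals true.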

Gödel hit upon the rare but powerful property of self-reference. Mathematical versions of self-referring expressions, such as "This statement is not provable in this system," can be constructed without breaking the rules of the mathematical system. But the so-called self-referring "Gödel statements" introduce contradictions into mathematics: if they are true, they are unprovable. If they are false, then because they say they are unprovable, they are actually true. True means false, and false means true, a contradiction.
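
In modern textbook notation (a standard formulation, not a quotation from the source), the construction can be sketched as follows: for a sufficiently strong, consistent formal system $F$ with a provability predicate $\mathrm{Prov}_F$, the diagonal lemma yields a sentence $G$ such that

$$
F \vdash \; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner),
$$

so $G$ in effect says of itself that it is not provable in $F$. Gödel's theorem is that if $F$ is consistent, $F$ proves neither $G$ nor its negation.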

Going back to the concept of intuition: we humans can see that the Gödel statement is, in fact, true, but because of Gödel's result, we also know that the rules of the system cannot prove it; the system is in effect blind to something not covered by its rules. Truth and provability pull apart. Perhaps mind and machine do as well. The purely formal system has limits, at any rate. It cannot prove, in its own language, something that is true. In other words, we can see something that the computer cannot.

Gödel's result dealt a massive blow to a popular idea at the time: that all of mathematics could be converted into rule-based operations, cranking out mathematical truths one by one. The Zeitgeist was formalism, not talk of minds, spirits, souls, and the like. The formalist movement in mathematics signaled a broader turn by intellectuals toward scientific materialism, and in particular logical positivism, a movement dedicated to eradicating traditional metaphysics like Platonism, with its abstract Forms that could not be observed with the senses, and traditional notions in religion like the existence of God. The world was turning to the idea of precision machines, in effect. And no one took up the formalist cause as vigorously as the German mathematician David Hilbert.

At the outset of the twentieth century (before Gödel), Hilbert had issued a challenge to the mathematical world: show that all of mathematics rests on a secure foundation. Hilbert's worry was understandable. If the purely formal rules of mathematics cannot be shown to prove only truths, it is at least theoretically possible for mathematics to disguise contradictions and nonsense. A contradiction buried somewhere in mathematics ruins everything, because from a contradiction anything can be proven. Formalism then becomes useless.
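
The claim that "from a contradiction, anything can be proven" is the classical principle of explosion; a standard short derivation (ours, for illustration) runs:

$$
\begin{aligned}
&1.\ P \wedge \neg P && \text{(the buried contradiction)}\\
&2.\ P && \text{from 1}\\
&3.\ P \vee Q && \text{from 2, disjunction introduction}\\
&4.\ \neg P && \text{from 1}\\
&5.\ Q && \text{from 3 and 4, disjunctive syllogism, for any } Q.
\end{aligned}
$$

So a single hidden contradiction would let the rules "prove" every statement whatsoever, which is why Hilbert wanted a guarantee that none was lurking.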

Hilbert expressed every formalist's dream: to prove, finally, that mathematics is a closed system governed only by rules. Truth is just "proof." We acquire knowledge simply by tracing the "code" of a proof and confirming that no rules were violated. The larger dream, thinly disguised, was really a worldview, a picture of the universe as itself a mechanism.

 

Condensing the history of the AI myth

The story of artificial intelligence starts with the ideas of someone who had immense human intelligence: the computer pioneer Alan Turing.

In 1950 Turing published a provocative paper, "Computing Machinery and Intelligence," about the possibility of intelligent machines. The paper was bold, coming at a time when computers were new and unimpressive by today's standards: slow, heavy pieces of hardware that sped up scientific calculations like code-breaking. After much preparation, they could be fed physical equations and initial conditions and crank out the radius of a nuclear blast. IBM quickly grasped their potential for replacing humans doing calculations for businesses, like updating spreadsheets. But viewing computers as "thinking" took imagination.

Turing's proposal was based on a popular entertainment called the "imitation game." In the original game, a man and a woman are hidden from view. A third person, the interrogator, relays questions to one of them at a time and, by reading the answers, attempts to determine which is the man and which the woman. The twist is that the man tries to deceive the interrogator while the woman tries to assist him, making replies from either side suspect. Turing replaced the man and woman with a computer and a human. Thus began what we now call the Turing test: a computer and a human receive typed questions from a human judge, and if the judge cannot reliably identify which is the computer, the computer wins. Turing argued that with such an outcome we have no good reason to define the machine as unintelligent, human or not. Thus, the question of whether a machine behaves intelligently replaces the question of whether it can truly think.
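
The structure of the test can be sketched in a few lines of Python; the judge and the two respondents below are placeholders invented for illustration, not anything Turing specified.

    import random

    # Bare-bones sketch of the Turing-test setup: a judge sends the same typed
    # questions to two hidden respondents, then guesses which one is the machine.
    def human(question):
        return "I'd have to think about that."   # stand-in for a real person

    def machine(question):
        return "I'd have to think about that."   # stand-in for a chatbot

    def judge(questions, respondent_a, respondent_b):
        for q in questions:
            print("Q:", q)
            print("A:", respondent_a(q))
            print("B:", respondent_b(q))
        return random.choice(["A", "B"])          # the judge's best guess

    # Hide who is who; the machine "wins" if the judge does no better than chance.
    a, b = random.sample([human, machine], 2)
    print("Judge guesses the machine is:", judge(["What is a sonnet?"], a, b))

The test deliberately measures only the observable exchange; nothing in the setup asks whether the machine is conscious or really thinking.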

The Turing test is actually tough; no computer has ever passed it. Turing, of course, did not know this long-term result in 1950. However, by replacing pesky philosophical questions about "consciousness" and "thinking" with a test of observable output, he encouraged the view of AI as a legitimate science with a well-defined aim.

In the course of its short existence, AI has undergone many changes. These can be summarized in six stages.

First of all, in the euphoria of AI's origins and early successes, researchers gave free rein to their imagination, indulging in reckless pronouncements for which they were heavily criticized later. For instance, in 1958, the American political scientist and economist Herbert A. Simon, who would receive the Nobel Prize in Economic Sciences in 1978, declared that within ten years machines would become world chess champions if they were not barred from international competitions.

By the mid-1960s, progress seemed to be slow in coming. A 10-year-old child beat a computer at a chess game in 1965, and a report commissioned by the US Senate in 1966 described the intrinsic limitations of machine translation. AI got bad press for about a decade.

The work went on nevertheless, but the research was given a new direction. It focused on the psychology of memory and the mechanisms of understanding, with attempts to simulate these on computers, and on the role of knowledge in reasoning. This gave rise to techniques for the semantic representation of knowledge, which developed considerably in the mid-1970s and led to the development of expert systems, so called because they reproduce the thought processes of skilled specialists. Expert systems raised enormous hopes in the early 1980s with many applications, including medical diagnosis.

Technical improvements led to the development of machine learning algorithms, which allowed computers to accumulate knowledge and automatically reprogram themselves using their own experiences.

This led to the development of industrial applications (fingerprint identification, speech recognition, etc.), where AI, computer science, artificial life, and other disciplines were combined to produce hybrid systems.

Starting in the late 1990s, AI was coupled with robotics and human-machine interfaces to produce intelligent agents that suggested the presence of feelings and emotions. This gave rise, among other things, to the calculation of emotions (affective computing), which evaluates the reactions of a subject feeling emotions and reproduces them on a machine, and especially to the development of conversational agents (chatbots).

Since 2010, the power of machines has made it possible to exploit enormous quantities of data (big data) with deep learning techniques based on formal neural networks. A range of very successful applications in several areas, including speech and image recognition, natural language comprehension, and autonomous cars, has led to an AI renaissance.
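
As a minimal illustration of the kind of learning from data behind this renaissance, here is a tiny gradient-descent sketch in plain Python (a single artificial neuron on invented toy data; our example, not from the source). Deep learning systems stack millions of such units and train them on vastly more data, but the principle of adjusting weights to reduce error is the same.

    import math

    # Toy "learning from experience": a single logistic neuron adjusts its
    # weights to separate two clusters of 2-D points.
    data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.5

    def predict(x):
        z = w[0] * x[0] + w[1] * x[1] + b
        return 1.0 / (1.0 + math.exp(-z))         # sigmoid activation

    for _ in range(1000):                          # repeated gradient-descent passes
        for x, y in data:
            error = predict(x) - y                 # gradient of the cross-entropy loss
            w[0] -= lr * error * x[0]
            w[1] -= lr * error * x[1]
            b    -= lr * error

    print([round(predict(x), 2) for x, _ in data])  # approaches [0.0, 0.0, 1.0, 1.0]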

 

Some of the applications

Many achievements using AI techniques now surpass human capabilities. In 1997, a computer program defeated the reigning world chess champion, and more recently, in 2016, other computer programs beat the world's best Go [an ancient Chinese board game] players and some top poker players. Computers are proving, or helping to prove, mathematical theorems; knowledge is being automatically constructed from huge masses of data, measured in terabytes (10¹² bytes) or even petabytes (10¹⁵ bytes), using machine learning techniques.

As a result, machines can recognize speech and transcribe it, just as typists did in the past. Computers can accurately identify faces or fingerprints from among tens of millions, and understand texts written in natural languages. Using machine learning techniques, cars drive themselves; machines are better than dermatologists at diagnosing melanomas from photographs of skin moles taken with mobile phone cameras; robots are fighting wars instead of humans; and factory production lines are becoming increasingly automated.

Scientists are also using AI techniques to determine the function of certain biological macromolecules, especially proteins and genomes, from the sequences of their constituents: amino acids for proteins, bases for genomes. All the sciences are undergoing a major epistemological rupture with in silico experiments, so named because they are carried out by computers from massive quantities of data, using powerful processors whose cores are made of silicon. In this way, they differ from in vivo experiments performed on living matter and, above all, from in vitro experiments carried out in glass test tubes.

Today, AI applications affect almost all fields of activity, particularly in the banking, insurance, health, and defense sectors. Many routine tasks are now automated, transforming many trades and eventually eliminating some.

The point is that AI requires improvements in historically confounding areas, like the link between the brain and intelligence. Progress in areas with a history of steady growth (such as processing power and neural network size) does not guarantee ultra-intelligence at all.

Betting against a catastrophic end to the world has been right so far. On the other hand are Elon Musk, Kurzweil, et al., and they have some impressive street cred. After all, Kurzweil rightly predicted the timeline of the Human Genome Project's success, and not many would bet against what Musk's engineering teams can achieve.

 

Continued in Part Two:

 

For updates click homepage here