Quick History of Artificial Intelligence

qurasofficial
Jul 12, 2019

You may be surprised to learn that artificial intelligence (AI) is far older than most people imagine. Fantasies about artificial human-like beings go back to antiquity, while the first computerized machines were developed shortly after World War II. The notion that AI is brand new is simply a popular misconception.

The Beginnings of AI

Humans have long fantasized about artificial beings. In the “Iliad,” Homer wrote of mechanical tripods serving dinner to the gods. Most famously, Mary Shelley’s “Frankenstein” is about a humanoid that destroys its creator. Years later, Jules Verne and Isaac Asimov wrote about robots, as did L. Frank Baum, author of “The Wizard of Oz.”

Philosophers likewise theorized about artificially created life forms. Leibniz and Pascal built early calculating machines, while the Abbé de Condillac imagined a statue with no memory or consciousness that was granted sensations one at a time. In 1920, Czech author Karel Čapek introduced the term “robot” in his play “R.U.R.,” in which factories cranked out artificial people that ultimately extinguished the human race.

While these may not exactly qualify as “artificial intelligence” as we know it today, they all show humanity’s long history of fantasizing about creating autonomous beings with the purpose of helping humans.

WWII and AI

Six months after WWII ended, the University of Pennsylvania unveiled ENIAC (the Electronic Numerical Integrator and Computer), often described as the world’s first general-purpose electronic computer. The so-called “giant brain” was enormous: it filled a long basement room and weighed about 30 tons. The machine performed arithmetic roughly one thousand times faster than the electromechanical calculators of the day and excited the press with its ability to carry out some 5,000 additions per second.

In those exciting days of early computing, researchers wrote scores of articles on “intelligent machinery.” Of all the scientists, John von Neumann, Alan Turing, and Claude Shannon stood out for their philosophical and technical contributions. Each went on to play a critical role in developing AI; Shannon became known as the “father of information theory.”

Gradually, research on intelligent machines matured to the point where it subdivided into distinct subfields. And it was at the field’s first official event, the Dartmouth Conference in 1956, that researcher John McCarthy gave this new discipline its name: “artificial intelligence.”

DARPA’s Involvement

It was the generosity of the US Defense Department’s Advanced Research Projects Agency (then ARPA, now DARPA) that actually got the technology rolling.

In June 1963, the agency awarded MIT a $2.2 million grant for “machine-aided cognition.” In 1968, SYSTRAN produced its first Russian-English machine translation for the US Air Force. Other AI innovations of the period included a program that arranged colored, differently shaped blocks with a robotic hand. Programs such as STUDENT, SAINT, and ANALOGY tackled algebra word problems, symbolic integration, and geometric analogies. A machine translating between English and Russian, intended to promote world peace, amazed visitors at the 1964 New York World’s Fair.

Famously, there was also SIR, a program that seemed to understand basic sentences. It modeled simple forms of human reasoning but was a far cry from today’s Siri, Cortana, Bixby, and other assistants.

AI’s First Problems

In 1965, Herbert Simon predicted that within just twenty years, machines would be capable “of doing any work a man can do.” Marvin Minsky added that “within a generation, the problem of creating Artificial Intelligence will substantially be solved.” However, by the early 1970s the money had dried up, as Nixon abolished the Office of Science and Technology and budgets for applied research were slashed.

Thankfully, big business brought AI back to life in the 1980s with expert systems, programs that worked side by side with people and helped companies improve their operations.

For a while, this new AI field seemed poised to make its investors as wealthy as personal computing had made Bill Gates and Steve Jobs.

And then, for the second time, the money dried up.

The Rise of Modern AI

AI’s second round of problems was worse than its first. Two of AI’s leading expert-system companies, Teknowledge and IntelliCorp, lost millions of dollars in 1987. Other AI companies filed for bankruptcy.

And then, a decade later, with the arrival of the web search engine, things started to move again.

Over the next dozen years, Google took the lead, launching numerous products and experimenting widely with AI, and released many of its projects as open source so that anyone could contribute.

By 2019, AI had reached what The New York Times called “a frenzy.” Built on ever more complex machine-learning algorithms, AI has produced remarkable inventions, including wearable devices for healthier lives, autonomous vehicles, smarter computers, home automation, and many other innovations.

Lately, AI researchers differentiate between “weak AI” and “strong AI.” The first is AI as we know it: a program such as Siri that operates on a narrowly defined problem. The second is a not-yet-existent category that tends to frighten some people: a machine that can perform general intelligent actions and, taken to the extreme, experience consciousness.

Conclusion

All things considered, the dream of AI is older than many things, including pizza! AI has shifted a great deal through the ages, yet one idea has always lingered in the human consciousness: the desire to build an artificial mind that could help propel our civilization forward. It remains to be seen when exactly we will reach the turning point and finally see “strong AI” in action.
