History and Development

The idea of Artificial Intelligence, albeit in rudimentary form, was present in the minds of philosophers, mathematicians and authors throughout history. The historian of computing Pamela McCorduck, in her book “Machines Who Think”[1], references the Greek myth of Talos, a man of bronze created by the god Hephaestus to patrol and protect the shores of Crete, along with the more famous story of Pandora, likewise a creature fashioned by Hephaestus, as two of the earliest examples of mythical thinking machines.

In 1936 a landmark was reached and the idea of the modern computer was born when Alan Turing published his paper On Computable Numbers[2]. Turing proposed a machine, known today as a Turing machine, that is capable of computing anything that is mathematically computable. Because it operates on a tape, such a machine can execute stored instructions (programs), and it became the conceptual basis of the modern computer.[3]
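
A machine of this kind is simple enough to simulate directly. The short Python sketch below is purely illustrative and not drawn from the source: run_turing_machine drives a one-tape machine from a transition table, and flip_bits is a hypothetical example machine, chosen only for this sketch, that inverts a binary string and halts at the first blank cell.

    # Illustrative sketch (not from the source): a minimal one-tape Turing machine simulator.
    from collections import defaultdict

    BLANK = "_"

    def run_turing_machine(transitions, tape, start_state="q0", halt_state="halt", max_steps=10_000):
        """Run a one-tape Turing machine.

        transitions: dict mapping (state, symbol) -> (new_state, written_symbol, move),
                     where move is -1 (left), 0 (stay) or +1 (right).
        tape: initial tape contents as a string.
        Returns the final tape contents as a string, with blanks trimmed.
        """
        # The tape is unbounded; unvisited cells read as BLANK.
        cells = defaultdict(lambda: BLANK, enumerate(tape))
        state, head = start_state, 0
        for _ in range(max_steps):
            if state == halt_state:
                break
            symbol = cells[head]
            state, cells[head], move = transitions[(state, symbol)]
            head += move
        used = sorted(cells)
        return "".join(cells[i] for i in range(used[0], used[-1] + 1)).strip(BLANK)

    # Hypothetical example machine: invert every bit of a binary string, then halt.
    flip_bits = {
        ("q0", "0"): ("q0", "1", +1),
        ("q0", "1"): ("q0", "0", +1),
        ("q0", BLANK): ("halt", BLANK, 0),
    }

    print(run_turing_machine(flip_bits, "1011"))  # prints "0100"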

For a long time the idea of Artificial Intelligence was considered mere science fiction; during the 1950s, however, mathematicians began seriously discussing the idea of learning machines. Modern AI emerged in that decade with a view to solving complex mathematical problems and creating ‘thinking machines’. The field was initially founded to answer one question: is it possible to build a machine that has intelligence, specifically a human level of intelligence?[4]

The first artificial intelligence program (the first program specifically engineered to mimic the problem-solving skills of a human being) was created in 1955-56 by Herbert Simon, Allen Newell and John Shaw. The program was called the “Logic Theorist”.

The term artificial intelligence was first coined by John McCarthy in 1956, when he held the first academic conference on the subject (popularly known as the Dartmouth Conference). Often called the father of AI, McCarthy also developed the LISP programming language, which became a dominant language of early AI research.

Fourteen years after introducing the world to the concept of the Turing machine, Turing wrote another paper, ‘Computing Machinery and Intelligence’[5], in which he proposed a test of a machine’s ability to demonstrate intelligence, now known as the Turing Test.

After the so-called boom in AI development during the 1950s and 1960s, progress slowed significantly for roughly two decades. This period, in which funding was reduced and enthusiasm dwindled, became popularly known as the AI winter.

Once the fundamentals of AI had been laid down by Alan Turing, John McCarthy, Herbert Simon, Allen Newell and John Shaw, the concept of AI materialized in a number of applications. Some notable developments are listed below:

  • 1966: Birth of the first chatbot: Joseph Weizenbaum, a German-American computer scientist at the Massachusetts Institute of Technology, created a computer program that could converse with humans. ‘ELIZA’ used scripts to simulate various conversation partners, such as a psychotherapist.
  • 1970: WABOT-1: The first anthropomorphic robot was built in Japan at Waseda University. Its features included movable limbs and the ability to see and converse.
  • 1972: AI entered the medical field: Expert systems, computer programs that bundle the knowledge of a specialist field into formulas, rules and a knowledge base, began to be used for diagnosis and treatment support in medicine.
  • 1986: NETtalk learned to speak: Terrence J. Sejnowski and Charles Rosenberg taught their ‘NETtalk’ program to pronounce words by feeding it sample sentences and phoneme chains.
  • 1997: Computer beat the world chess champion: IBM’s chess computer ‘Deep Blue’ defeated the reigning world chess champion Garry Kasparov in a six-game match.

[1] McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Expectations of Artificial Intelligence. Available at: https://books.google.co.in/books?id=9NSNDwAAQBAJ&pg=PA338

[2] Turing, Alan Mathison. “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society, s2-42 (1937): 230-265.

[3] Evans, Guy-Warwick. “Artificial Intelligence: Where We Came From, Where We Are Now, and Where We Are Going.” University of Victoria, 2013.

[4] Guy-Warwick Evans, supra note 3.

[5] Turing, Alan M. “Computing Machinery and Intelligence.” Mind 59.236 (1950): 433-460.