History of AI

Origin of Artificial Intelligence

The idea of inanimate objects coming to life as intelligent beings has been around for a long time. The ancient Greeks had myths about robots, the Jewish faith has the idea of the clay golem that would come to life to protect the community in times of need, and the alchemist Paracelsus wrote about how to create the “homunculus,” or “small human being.” But what was once myth and legend might not be so unrealistic anymore.

The earliest research into thinking machines gathered momentum during the late 1930s and 1940s, after research in neurology had shown that the brain was an electrical network of neurons that fired in pulses. Researchers at the time therefore theorized that it might be possible to construct an electronic brain. The concept of artificial intelligence was starting to take form.

AI is based on the assumption that human thought processes can be mechanized. This idea had been explored by philosophers for centuries, but serious study only began with the invention of the first computers. The first modern computers were the machines built during the Second World War, such as the Z3, the ENIAC, and the code-breaking Colossus. No one can refute a computer’s ability to process logic, but could a machine think? What does it mean to think?

“I propose to consider the question, ‘Can machines think?’” was the opening statement of Alan Turing’s paper “Computing Machinery and Intelligence.” Turing recognized that “thinking” was difficult to define, so he created the “Turing test.” The idea was that participants would unknowingly converse with a machine, and if they could not tell the difference between man and machine, the machine would have passed the test. It was the first serious proposal in the philosophy of artificial intelligence.

These developments eventually led to the Dartmouth Conference in 1956, where the term AI was born and accepted as the name of the field. The conference featured many important scientists, all of whom would go on to create significant AI programs in the years that followed. It also saw the debut of the Logic Theorist, generally considered the first AI program, which was able to prove 38 of the first 52 theorems in “Principia Mathematica” and even found new proofs for some. The AI field had gained its name, its mission, and major involvement from scientists.

Mariana Xavier

Evolution of Artificial Intelligence

The history of artificial intelligence has grown alongside myths, rumors, and stories.

The computer was created in the 1940s, based on mathematical reasoning. Because of this, scientists started to believe that they could someday create an electronic brain.

From 1956 to 1974 there was a “golden era” of discovery and experimentation. During this period, computers were solving algebra word problems, proving theorems in geometry, and learning to speak English.

The problems started in 1974. Computer power was limited: there was not enough memory or processing speed to accomplish anything truly useful.

Researchers realized that no scientist in the 1970s could build a database large enough, and no one knew how a program might learn so much information.

Theorems and geometry problems were easy for computers, but tasks like recognizing a face or crossing a room were difficult. This became known as Moravec’s paradox.

In the 1980s there was a boom. A form of AI program called “expert systems” was adopted by corporations around the world, and knowledge became the focus of AI research. In those same years, the Japanese government funded AI through its fifth-generation computer project.

The prestige of artificial intelligence grew in 1997 when the Deep Blue computer defeated the world chess champion, Garry Kasparov. Can you imagine? A computer defeating a human. Deep Blue had enormous power, evaluating 200 million moves per second, while a human can examine only about 50 moves. Yet the computer was not thinking about strategy or learning while playing, as systems do now.

In 2011, scientists around the world were developing neural networks modeled loosely on the ones in our brains. That year, Jeff Dean, a Google engineer, met Andrew Ng, a professor of computer science at Stanford. The two built a large neural network. Over three days, the 16,000-processor network analyzed 10 million YouTube screenshots. It then produced three blurry images representing the visual patterns it had found in the test images: a human face, a human body, and a cat.

In 2012, at the University of Toronto, a professor and two students created AlexNet, a neural network model built to compete in an image recognition contest called ImageNet, in which participants had to use their systems to process millions of test images and identify their contents. AlexNet won the contest. In 2013, British researchers published an article showing how they could use a neural network to play and win 50 games. In 2016, they developed a neural network model called AlphaGo that learned from play: it learned from both winning and losing strategies.

Recent advances have such a wide impact that the true history of artificial intelligence may be just beginning. 

Márcia Valente

Types of Artificial Intelligence

We can say that there are mainly two types, Type 1 and Type 2: the first is based on capabilities and the second on functionality.

Within Type 1, AI can be classified as Weak AI (or Narrow AI), General AI, or Super AI.

Weak AI, or Narrow AI, is an intelligence capable of performing one specific task but unable to operate beyond its limitations. If pushed past those limits, this AI can fail in unpredictable ways. It is considered the most common type and the only one currently available. One example is image recognition.

General AI is a type of intelligence that could perform any intellectual task as well as a human. Researchers are trying to develop such a system, one smart enough to think like a human on its own. Systems with general AI are still under investigation, and it will take time and commitment to establish such a system.

Super AI is a level at which machines would exceed human intelligence and perform any task better than humans. At this level, the skills of thinking, learning, planning, and communicating on their own are all incorporated.

Within Type 2, AI can be designated as Reactive Machines, Limited Memory, Theory of Mind, or Self-Awareness.

Reactive Machines are the most basic type of AI. These machines have extremely limited capability: they do not store memories or past experiences that might be needed for future actions, which means they cannot use previous data to inform current decisions.

Limited Memory AI can store data or past experiences, but only for a short period of time; it is capable of learning from this historical data and using it for a limited period. One example is self-driving cars, which store information such as the distance to other cars, the speed limit on the streets they are travelling, and other details that may be important when navigating the road.

Theory of Mind machines have not yet been developed, but they would be able to understand human emotions, beliefs, and thought processes, and even interact socially like humans. Such machines would understand humans as individuals.

Self-aware AI is still a theoretical concept. These machines would not only understand humans but would also have emotions, beliefs, needs, and potentially desires of their own. They would be smarter than humans, capable of having their own ideas and, as the name says, self-awareness.

Maria Coelho
