A subjective guide for your artificial intelligence journey


What is Artificial Intelligence?
Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.

HOW DOES AI WORK?
Can machines think? — Alan Turing, 1950

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: “Can machines think?”

Turing’s paper “Computing Machinery and Intelligence” (1950), and the Turing Test it proposed, established the fundamental goal and vision of artificial intelligence.

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of AI has given rise to many questions and debates, so much so that no singular definition of the field is universally accepted.

The major limitation in defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what AI is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions.” (Russell and Norvig viii)

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI:

Thinking humanly
Thinking rationally
Acting humanly
Acting rationally


The first two ideas concern thought processes and reasoning, while the other two deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting “all the skills needed for the Turing Test also allow an agent to act rationally.” (Russell and Norvig 4)
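To make the percept-action idea concrete, here is a minimal sketch of a simple reflex agent in Python. The thermostat setting, threshold and action names are illustrative assumptions, not anything specified in Russell and Norvig’s text:

```python
# A toy reflex agent in the Russell/Norvig sense: it receives a percept
# from its environment and maps it to an action. The specifics here
# (a thermostat, a 20-degree threshold) are made up for illustration.
def thermostat_agent(percept: float) -> str:
    """Map a temperature percept to an action."""
    return "heat_on" if percept < 20.0 else "heat_off"

# The environment feeds the agent a stream of percepts.
for temperature in [18.5, 19.9, 21.3]:
    action = thermostat_agent(temperature)
    print(f"percept={temperature} -> action={action}")
```

Even this trivial loop has the shape of the definition above: percepts in, actions out. Everything interesting in AI lies in how that mapping is computed.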

Patrick Winston, Ford Professor of Artificial Intelligence and Computer Science at MIT, defines AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”

While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with machine learning and other subsets of artificial intelligence.

While addressing a crowd at the Japan AI Experience in 2017, DataRobot CEO Jeremy Achin began his speech by offering the following definition of how AI is used today:

“AI is a computer system able to perform tasks that ordinarily require human intelligence… Many of these AI systems are powered by machine learning, some of them are powered by deep learning and some of them are powered by very boring things like rules.”

HOW IS AI USED?
Artificial intelligence generally falls under two broad categories:

Narrow AI: Sometimes referred to as “Weak AI,” this kind of AI operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.


Artificial General Intelligence (AGI): AGI, sometimes referred to as “Strong AI,” is the kind of AI we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.


ARTIFICIAL INTELLIGENCE EXAMPLES
Smart assistants (like Siri and Alexa)
Disease mapping and prediction tools
Manufacturing and drone robots
Optimized, personalized healthcare treatment recommendations
Conversational bots for marketing and customer service
Robo-advisors for stock trading
Spam filters on email
Social media monitoring tools for dangerous content or false news
Song or television program recommendations from Spotify and Netflix
Narrow AI
Narrow AI is all around us and is easily the most successful realization of artificial intelligence to date. With its focus on performing specific tasks, Narrow AI has experienced numerous breakthroughs in the last decade that have had “significant societal benefits and have contributed to the economic vitality of the nation,” according to “Preparing for the Future of Artificial Intelligence,” a 2016 report released by the Obama Administration.

A few examples of Narrow AI include:

Google search
Image recognition software
Siri, Alexa and other personal assistants
Self-driving cars
IBM’s Watson
Machine Learning & Deep Learning
Much of Narrow AI is powered by breakthroughs in machine learning and deep learning. Understanding the difference between artificial intelligence, machine learning and deep learning can be confusing. Venture capitalist Frank Chen provides a good overview of how to distinguish between them, noting:

“Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques.”

Simply put, machine learning feeds a computer data and uses statistical techniques to help it “learn” how to get progressively better at a task, without having been specifically programmed for that task, eliminating the need for millions of lines of written code. Machine learning consists of both supervised learning (using labeled data sets) and unsupervised learning (using unlabeled data sets).
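The contrast between the two flavors is easiest to see in code. Below is a minimal sketch using scikit-learn; the toy “email” features, labels and cluster count are invented purely for illustration:

```python
# Supervised vs. unsupervised learning in miniature, with made-up toy data.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

emails = [[0, 1], [1, 0], [0, 2], [3, 0]]  # hypothetical features, e.g. [links, typos]

# Supervised learning: every training example comes with a label.
labels = [0, 1, 0, 1]                      # 0 = legitimate, 1 = spam
classifier = LogisticRegression().fit(emails, labels)
print(classifier.predict([[2, 0]]))        # predicts a label for unseen data

# Unsupervised learning: no labels; the algorithm finds structure on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(emails)
print(clusters)                            # cluster assignments discovered from the data
```

The statistical machinery differs, but the theme is the same: the behavior is fit from data rather than hand-coded.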

Deep learning is a type of machine learning that runs inputs through a biologically inspired neural network architecture. The neural networks contain a number of hidden layers through which the data is processed, allowing the machine to go “deep” in its learning, making connections and weighting input for the best results.
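As a rough sketch of what “hidden layers” means in practice, here is a tiny feedforward network in PyTorch; the layer sizes and random inputs are arbitrary assumptions chosen only for illustration:

```python
# A toy "deep" network: inputs flow through stacked hidden layers, each of
# which weights its inputs and applies a nonlinearity. Sizes are arbitrary.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),   # hidden layer 1
    nn.Linear(16, 16), nn.ReLU(),  # hidden layer 2: the "deep" part
    nn.Linear(16, 2),              # output layer: scores for two classes
)

x = torch.randn(8, 4)              # a batch of 8 made-up 4-feature inputs
print(model(x).shape)              # torch.Size([8, 2])
```

Training (adjusting the layer weights from labeled data) is omitted here; the point is only the layered architecture the paragraph describes.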

Artificial General Intelligence
The creation of a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for AGI has been fraught with difficulty.

The search for a “universal algorithm for learning and acting in any environment” (Russell and Norvig 27) isn’t new, but time hasn’t eased the difficulty of essentially creating a machine with a full set of cognitive abilities.


AGI has long been the muse of dystopian fiction in which super-intelligent robots overrun humanity, but experts agree it’s not something we need to worry about anytime soon.

HISTORY OF AI


Intelligent robots and artificial beings first appeared in the ancient Greek myths of Antiquity. Aristotle’s development of the syllogism and its use of deductive reasoning was a key moment in mankind’s quest to understand its own intelligence. The following is a quick look at some of the most important events in AI.

1943
Warren McCulloch and Walter Pitts publish “A Logical Calculus of Ideas Immanent in Nervous Activity.” The paper proposes the first mathematical model for building a neural network.
1949
In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they’re used. Hebbian learning continues to be an important model in AI.
1950
Alan Turing publishes “Computing Machinery and Intelligence,” proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.
Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
Claude Shannon publishes the paper “Programming a Computer for Playing Chess.”
Isaac Asimov publishes the “Three Laws of Robotics.”
1952
Arthur Samuel develops a self-learning program to play checkers.
1954
The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.
1956
The phrase “artificial intelligence” is coined at the Dartmouth Summer Research Project on Artificial Intelligence. Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today.
Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.
1958
John McCarthy develops the AI programming language Lisp and publishes the paper “Programs with Common Sense.” The paper proposed the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.
1959
Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.
Herbert Gelernter develops the Geometry Theorem Prover program.
Arthur Samuel coins the term machine learning while at IBM.
John McCarthy and Marvin Minsky found the MIT AI Project.
1963
John McCarthy starts the AI Lab at Stanford.
1966
The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded machine translation projects.
1969
The first successful expert systems, DENDRAL, a chemical analysis program, and MYCIN, designed to diagnose blood infections, are developed at Stanford.
1972
The logic programming language PROLOG is created.
1973
The “Lighthill Report,” detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.
1974-1980
Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year’s “Lighthill Report,” artificial intelligence funding dries up and research stalls. This period is known as the “First AI Winter.”
1980
Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first “AI Winter.”
1982
Japan’s Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.
1983
In response to Japan’s FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.
1985
Companies are spending more than a billion dollars a year on expert systems and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
1987-1993
As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the “Second AI Winter.” During this period, expert systems prove too expensive to maintain and update, eventually falling out of favor.
Japan terminates the FGCS project in 1992, citing failure in meeting the ambitious goals outlined a decade earlier.
DARPA ends the Strategic Computing Initiative in 1993 after spending nearly $1 billion and falling far short of expectations.
1991
U.S. forces deploy DART, an automatic logistics planning and scheduling tool, during the Gulf War.
1997
IBM’s Deep Blue beats world chess champion Garry Kasparov.
2005
STANLEY, a self-driving car, wins the DARPA Grand Challenge.
The U.S. military begins investing in autonomous robots like Boston Dynamics’ “Big Dog” and iRobot’s “PackBot.”
2008
Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.
2011
IBM’s Watson trounces the competition on Jeopardy!.
2012
Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network using deep learning algorithms 10 million YouTube videos as a training set. The neural network learns to recognize a cat without being told what a cat is, ushering in the breakthrough era for neural networks and deep learning funding.
2014
Google makes the first self-driving car to pass a state driving test.
2016
Google DeepMind’s AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.
