- Herbert Simon, 1983 - Learning denotes changes in a system that enable a system to do the same task more efficiently the next time.
- Marvin Minsky, 1986 - Learning is making useful changes in the workings of our minds.
- Ryszard Michalski, 1986 - Learning is constructing or modifying representations of what is being experienced.
- Mitchell, 1997 - A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
1. What is Learning?
Learning denotes changes in a system that enable the system to do the same task more efficiently the next time. Learning is an important feature of “Intelligence”.
1.1 Definition:
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. (Mitchell 1997)
This means :
Given : A task T
A performance measure P
Some experience E with the task
Goal : Generalize the experience in a way that allows you to improve your performance on the task.
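Mitchell's (T, P, E) framing can be made concrete with a small sketch. The Python toy below is illustrative only: the function names, the labeled data, and the trivial majority-class learner are all hypothetical choices. Experience E is a set of labeled examples, the task T is label prediction, and the performance measure P is accuracy.

```python
# A minimal sketch of the (T, P, E) framing with hypothetical names and data.
from collections import Counter

def learn(experience):
    """From experience E, form a hypothesis (here: predict the majority label)."""
    majority = Counter(label for _, label in experience).most_common(1)[0][0]
    return lambda x: majority

def performance(hypothesis, examples):
    """Performance measure P: fraction of examples labeled correctly."""
    return sum(hypothesis(x) == y for x, y in examples) / len(examples)

E = [(1, "spam"), (2, "spam"), (3, "ham"), (4, "spam")]
h = learn(E)
print(performance(h, E))  # 0.75 -- better than chance, learned from E alone
```

A richer learner would improve P further as E grows; the point here is only the separation of task, measure, and experience.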
Why do we require Machine Learning?
■ Understand and improve efficiency of human learning.
■ Discover new things or structure that is unknown to humans.
■ Fill in skeletal or incomplete specifications about a domain
1.2 Learning Agents.
An agent is an entity that is capable of perceiving and acting. An agent can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
In computer science an agent is a software agent that assists users and acts in performing computer-related tasks.
1.3 Components of a Learning System:
■ Performance Element: The Performance Element is the agent itself that acts in the world. It takes in percepts and decides on external actions.
■ Learning Element: It is responsible for making improvements; it takes knowledge about the performance element and some feedback, and determines how to modify the performance element.
■ Critic: Tells the Learning Element how agent is doing (success or failure) by comparing with a fixed standard of performance.
■ Problem Generator: Suggests problems or actions that will generate new examples or experiences that will aid in training the system further.
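The four components can be wired together in a minimal sketch. The Python toy below is illustrative only: the threshold rule, the fixed standard of 1.0, and the update step are hypothetical choices, not part of these notes.

```python
# Illustrative-only wiring of the four components of a learning system.
import random

class LearningAgent:
    def __init__(self):
        self.threshold = 0.0          # knowledge used by the performance element

    def performance_element(self, percept):
        """Acts in the world: decides an external action from a percept."""
        return "act" if percept > self.threshold else "wait"

    def critic(self, action, outcome, standard=1.0):
        """Compares the outcome against a fixed standard of performance."""
        return outcome - standard     # positive = success, negative = failure

    def learning_element(self, feedback):
        """Uses the critic's feedback to modify the performance element."""
        self.threshold -= 0.1 * feedback

    def problem_generator(self):
        """Suggests a new experience for further training."""
        return random.uniform(-1, 1)

agent = LearningAgent()
percept = agent.problem_generator()
action = agent.performance_element(percept)
agent.learning_element(agent.critic(action, outcome=0.5))
```

One pass through the loop: the problem generator proposes a percept, the performance element acts, the critic scores the outcome, and the learning element adjusts the threshold.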
2. Paradigms of Machine Learning:
■Rote Learning: Learning by memorization; One-to-one mapping from inputs to stored representation; Association-based storage and retrieval.
■Induction: Learning from examples; A form of supervised learning, uses specific examples to reach general conclusions; Concepts are learned from sets of labeled instances.
■Clustering: Discovering similar groups; unsupervised, inductive learning in which natural classes are found for data instances, as well as ways of classifying them.
■Analogy: Determine the correspondence between two different representations; a form of inductive learning in which a system transfers knowledge from one domain to a different domain.
■ Discovery: Learning without help from a teacher; learning is both inductive and deductive. It is deductive if it proves theorems and discovers concepts about those theorems. It is inductive when it raises conjectures (guesses). It is unsupervised; no specific goal is given.
■ Genetic Algorithms:Inspired by natural evolution; In the natural world, the organisms that are poorly suited for an environment die off, while those well-suited for it prosper. Genetic algorithms search the space of individuals for good candidates. The "goodness" of an individual is measured by some fitness function. Search takes place in parallel, with many individuals in each generation.
■ Reinforcement: Learning from feedback (+ve or -ve reward) given at the end of a sequence of steps. Unlike supervised learning, reinforcement learning takes place in an environment where the agent cannot directly compare the results of its action to a desired result. Instead, it is given some reward or punishment that relates to its actions. It may win or lose a game, or be told it has made a good move or a poor one. The job of reinforcement learning is to find a successful function using these rewards.
2.1. Rote Learning: The rote learning technique avoids understanding the inner complexities of the subject, focusing instead on memorizing the material so that it can be recalled by the learner exactly the way it was read or heard.
• Learning by Memorization, which avoids understanding the inner complexities of the subject being learned; rote learning instead focuses on memorizing the material so that it can be recalled by the learner exactly the way it was read or heard.
• Learning something by repeating it over and over again; saying the same thing and trying to remember how to say it. It does not help us to understand; it helps us to remember, the way we learn a poem or a song by rote.
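Rote learning corresponds directly to a lookup table: a one-to-one mapping from inputs to stored answers, with association-based storage and retrieval and no generalization. A minimal Python sketch (the questions and answers are made up for illustration):

```python
# Rote learning as pure memorization: store exactly what was experienced,
# recall it exactly; no understanding, so no transfer to unseen inputs.
cache = {}

def rote_learn(question, answer):
    cache[question] = answer            # one-to-one storage

def recall(question):
    return cache.get(question)          # association-based retrieval only

rote_learn("7 x 8", 56)
print(recall("7 x 8"))   # 56   -- remembered verbatim
print(recall("8 x 7"))   # None -- never memorized, so nothing is recalled
```

The failure on "8 x 7" is the point: memorization gives recall, not understanding.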
2.2. Learning from Example, Induction: A process of learning by example. The system tries to induce a general rule from a set of observed instances. The learning methods extract rules and patterns out of massive data sets. The learning process belongs to supervised learning: it does classification and constructs class definitions, which is called induction or concept learning.
The techniques used for constructing class definitions (or concept learning) are :
• Winston's Learning program
• Version Spaces
• Decision Trees
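As a small illustration of induction from labeled instances, here is a Find-S-style concept learner in the spirit of version spaces: it keeps the most specific conjunctive hypothesis consistent with the positive examples, generalizing an attribute to "?" (any value) on a mismatch. The attributes and examples are hypothetical.

```python
# A tiny Find-S-style concept learner: induce a general conjunctive rule
# from specific labeled instances ("?" means any value is acceptable).
def find_s(examples):
    hypothesis = None
    for attributes, label in examples:
        if label != "yes":
            continue                       # Find-S ignores negative examples
        if hypothesis is None:
            hypothesis = list(attributes)  # start with the most specific hypothesis
        else:                              # minimally generalize on mismatch
            hypothesis = [h if h == a else "?"
                          for h, a in zip(hypothesis, attributes)]
    return hypothesis

data = [(("red", "small", "round"), "yes"),
        (("red", "large", "round"), "yes"),
        (("blue", "small", "square"), "no")]
print(find_s(data))  # ['red', '?', 'round']
```

The learned concept says "red and round, any size": a general conclusion induced from specific labeled instances.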
2.3. Learning by Discovery: Simon (1966) first proposed the idea that we might explain scientific discovery in computational terms and automate the processes involved on a computer.
Project DENDRAL (Feigenbaum 1971) demonstrated this by inferring structures of organic molecules from mass spectra, a problem previously solved only by experienced chemists.
Later, a knowledge-based program called AM, the Automated Mathematician (Lenat 1977), discovered many mathematical concepts.
After this, an equation discovery system called BACON (Langley, 1981) discovered a wide variety of empirical laws such as the ideal gas law. The research continued during the 1980s and 1990s but slowed as computational biology, bioinformatics and scientific data mining convinced many researchers to focus on domain-specific methods. But the need for research on general principles for scientific reasoning and discovery very much exists.
The discovery system AM relied strongly on theory-driven methods of discovery. BACON employed data-driven heuristics to direct its search for empirical laws. These two discovery programs are illustrated in the next few sections.
2.3.1. Theory Driven Discovery : Simon's theory-driven science means AI modeling for theory building. It starts with an existing theory, represented in some or all aspects in the form of a symbolic model, and one tries to transform the theory into a runnable program. One important reason for modeling a theory is scientific discovery; in the theory-driven approach, this means the discovery of new theoretical conclusions, gaps, or inconsistencies. Many computational systems have been developed for modeling different types of discoveries. The Logic Theorist (1956) was designed to prove theorems in logic at a time when AI did not yet exist as a field. Among the more recent systems, the Automated Mathematician AM (Lenat, 1979) is a good example of modeling mathematical discovery.
• AM (Automated Mathematician)
AM is a heuristic-driven program that discovers concepts in elementary mathematics and set theory. AM has two inputs:
(a) description of some concepts of set theory: e.g. union, intersection;
(b) information on how to perform mathematics, e.g. functions.
AM has successfully rediscovered concepts such as :
(a) Integers , Natural numbers, Prime Numbers;
(b) Addition, Multiplication, Factorization theorem ;
(c) Maximally divisible numbers, e.g. 12 has six divisors 1, 2, 3, 4, 6, 12.
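The maximally divisible numbers that AM rediscovered can be reproduced by a direct brute-force sketch: count divisors and report each number with more divisors than any smaller number. This is only an illustration of the concept itself, not of AM's heuristic search.

```python
# Maximally divisible numbers: each n whose divisor count beats every
# smaller number's (12, with six divisors, is among them).
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

best = 0
for n in range(1, 50):
    if len(divisors(n)) > best:
        best = len(divisors(n))
        print(n, divisors(n))  # 12 is printed with [1, 2, 3, 4, 6, 12]
```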
2.3.2. Data Driven Discovery: Data-driven science, in contrast to theory-driven, starts with empirical data or the input-output behavior of the real system, without an explicitly given theory. The modeler tries to write a computer program which generates the empirical data or input-output behavior of the system. Typically, models are produced in a generate-and-test procedure: writing program code which models the input-output behavior of the real system first approximately, and then improving it as long as the behavior does not correspond to the real system. A family of such discovery models is known as the BACON programs.
• BACON System:
Equation discovery is the area of machine learning that develops methods for automated discovery of quantitative laws, expressed in the form of equations, in collections of measured data. BACON is a pioneer among equation discovery systems.
BACON is a family of algorithms for discovering scientific laws from data.
a) BACON.1 discovers simple numeric laws.
b) BACON.3 is a knowledge-based system that has discovered simple empirical laws in the manner of physicists, and shown its generality by rediscovering the Ideal gas law, Kepler's third law, Ohm's law and more.
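The flavor of BACON.1's data-driven heuristics can be sketched as follows: given paired measurements, try simple combinations such as the product x·y and the ratio y/x, and conjecture a law when one of them stays constant across all observations. This toy (the function name and tolerance are hypothetical, not BACON's actual machinery) rediscovers a Boyle's-law-like relation P·V = constant:

```python
# A BACON.1-style heuristic: test simple combinations of paired measurements
# and conjecture a law when one of them is (near-)constant across the data.
def bacon1(xs, ys, tol=1e-6):
    products = [x * y for x, y in zip(xs, ys)]
    ratios = [y / x for x, y in zip(xs, ys)]
    for values, form in ((products, "x*y"), (ratios, "y/x")):
        if max(values) - min(values) < tol:
            return f"{form} = {values[0]:g}"   # constant found: state the law
    return None                                 # no simple law in this data

# Pressure-volume readings at fixed temperature (ideal gas: P*V = nRT)
P = [1.0, 2.0, 4.0, 8.0]
V = [8.0, 4.0, 2.0, 1.0]
print(bacon1(P, V))  # x*y = 8
```

The real BACON systems chain such heuristics, introducing new terms from old ones until a constant emerges; this sketch shows only a single step of that search.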
2.4 Analogy: Learning by analogy means acquiring new knowledge about an input entity by transferring it from a known similar entity. This technique transforms the solutions of problems in one domain to the solutions of the problems in another domain by discovering analogous states and operators in the two domains.
Example: Infer by analogy the hydraulics laws that are similar to Kirchhoff's laws.
2.5 Neural net and Genetic Learning: The Neural net, the Genetic learning and the Reinforcement learning are the Biology-inspired AI techniques. In this section the Neural net and Genetic learning are briefly described.
a) Neural Net (NN)
A neural net is an artificial representation of the human brain that tries to simulate its learning process. An artificial neural network (ANN) is often just called a "neural network" (NN).
■ Neural Networks model a brain learning by example.
■ Neural networks are structures "trained" to recognize input patterns.
■ Neural networks typically take a vector of input values and produce a vector of output values; inside, they train weights of "neurons".
■ A Perceptron is a model of a single `trainable' neuron.
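A perceptron, the single trainable neuron mentioned above, can be trained with the classic perceptron learning rule: nudge the weights toward the target whenever the output is wrong. In the sketch below the learning rate and epoch count are arbitrary illustrative choices; the network learns the AND function.

```python
# A single trainable neuron: the perceptron learning rule on the AND function.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1       # adjust weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
       for (x1, x2), _ in AND])  # [0, 0, 1] for the first three... no: [0, 0, 0, 1]
```

AND is linearly separable, so the rule is guaranteed to converge; a single perceptron cannot learn XOR, which is why multi-layer networks are needed.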
b) Genetic Learning: Genetic algorithms (GAs) are part of evolutionary computing. GA is a rapidly growing area of AI.
■ Genetic algorithms are implemented as a computer simulation, where techniques are inspired by evolutionary biology.
■ Mechanics of biological evolution: Every organism has a set of rules describing how that organism is built, encoded in the genes of the organism.
a) The genes are connected together into long strings called chromosomes.
b) Each gene represents a specific trait (feature) of the organism and has several different settings, e.g. setting for a hair color gene may be black or brown.
c) The genes and their settings are referred to as an organism's genotype.
d) When two organisms mate they share their genes. The resultant offspring may end up having half the genes from one parent and half from the other. This process is called cross over.
e) A gene may be mutated and expressed in the organism as a completely new trait.
■ Thus, Genetic Algorithms are a way of solving problems by mimicking processes nature uses, i.e., Selection, Crossover, Mutation, and Accepting, to evolve a solution to a problem.
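The selection, crossover, and mutation steps can be sketched as a minimal genetic algorithm. The fitness function here is the toy "OneMax" problem, counting 1-bits in a chromosome; the population size, mutation rate, and generation count are arbitrary illustrative choices.

```python
# A minimal GA over bit-string chromosomes: selection keeps the fittest,
# crossover mixes two parents, and mutation flips an occasional gene.
import random

def fitness(chromosome):
    return sum(chromosome)                      # "goodness" of an individual

def evolve(pop_size=20, genes=16, generations=40, mutation=0.02):
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # selection: fittest half survives
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)    # crossover: splice two parents
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < mutation else g for g in child]
            children.append(child)              # mutation may flip a gene
        pop = survivors + children
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(fitness(best))  # approaches 16 (all ones) over the generations
```

Note that the search proceeds in parallel: every generation evaluates many individuals at once, and poorly fit ones simply fail to reproduce.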
2.6. Reinforcement Learning: Reinforcement learning refers to a class of problems in machine learning which postulate an agent exploring an environment.
a) The agent perceives its current state and takes actions.
b) The environment, in return, provides a reward, positive or negative.
c) The algorithms attempt to find a policy for maximizing cumulative reward for the agent over the course of the problem.
In other words, the definition of Reinforcement learning is :
" A computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment."
Reinforcement learning is “a way of programming agents by reward and punishment without needing to specify how the task is to be achieved”.
Key Features of RL:
■ The learner is not told what actions to take; instead it finds out what to do by trial-and-error search.
■ The environment is stochastic; i.e., the behavior is non-deterministic, meaning a "state" does not fully determine its next "state".
■ The reward may be delayed, so the learner may need to sacrifice short-term gains for greater long-term gains.
■ The learner has to balance between the need to explore its environment and the need to exploit its current knowledge.
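These features can be seen in a minimal Q-learning sketch on a hypothetical five-cell corridor: only the rightmost cell gives a reward, so every earlier step's payoff is delayed, and an epsilon-greedy rule balances exploring the environment against exploiting current knowledge. All the parameters here are arbitrary illustrative choices.

```python
# Minimal Q-learning on a 5-cell corridor; reward only at the rightmost cell,
# so the value of early moves must be learned from delayed feedback.
import random

N, GOAL = 5, 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}   # state-action values

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)                     # walls clamp movement
    return s2, (1.0 if s2 == GOAL else 0.0)            # reward only at the goal

def greedy(s):
    return max((-1, 1), key=lambda act: Q[(s, act)])   # exploit current knowledge

random.seed(1)
alpha, gamma, epsilon = 0.5, 0.9, 0.2
for _ in range(300):                                   # episodes of trial and error
    s = random.randrange(N - 1)                        # start from a random cell
    while s != GOAL:
        # explore with probability epsilon, otherwise exploit
        a = random.choice((-1, 1)) if random.random() < epsilon else greedy(s)
        s2, r = step(s, a)
        # back up the (possibly delayed) reward through the value estimates
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(N - 1)])  # expected greedy policy: move right
```

The agent is never told "move right"; it converges on that policy purely from reward, and the discount factor gamma makes it accept zero-reward steps now for the larger delayed payoff.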