Saturday, August 11, 2012

Introduction to AI Agents (Part I)

Introduction to Agents 
An agent acts in an environment.

An agent perceives its environment through sensors. The complete set of inputs at a given time is called a percept. The current percept or a sequence of percepts can influence the actions of an agent. The agent can change the environment through actuators or effectors. An operation involving an effector is called an action. Actions can be grouped into action sequences. The agent can have goals which it tries to achieve.
Thus, an agent can be looked upon as a system that implements a mapping from percept sequences to actions.
A performance measure has to be used in order to evaluate an agent.
An autonomous agent decides autonomously which action to take in the current situation to maximize progress towards its goals. 
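
To make the percepts-to-actions mapping concrete, here is a minimal Python sketch of the agent abstraction. The class names and the two-square vacuum world are illustrative, not taken from any particular system.

class Agent:
    """An agent maps its percept sequence to an action."""

    def __init__(self):
        self.percepts = []  # the percept sequence observed so far

    def perceive(self, percept):
        self.percepts.append(percept)

    def act(self):
        raise NotImplementedError  # each concrete agent supplies a decision rule

class ReflexVacuumAgent(Agent):
    """A simple reflex agent for a two-square vacuum world (illustrative)."""

    def act(self):
        location, status = self.percepts[-1]  # only the latest percept is used
        if status == "dirty":
            return "suck"
        return "right" if location == "A" else "left"

agent = ReflexVacuumAgent()
agent.perceive(("A", "dirty"))
assert agent.act() == "suck"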

Examples of Agents 
1. Humans can be looked upon as agents. They have eyes, ears, skin, taste buds, etc. for sensors, and hands, fingers, legs, and mouth for effectors.

2. Robots are agents. Robots may have camera, sonar, infrared, bumper, etc. for sensors. They can have grippers, wheels, lights, speakers, etc. for actuators.
Some examples of robots are Xavier from CMU, COG from MIT, etc. 

[Image: Xavier robot (CMU)]

Then we have the AIBO entertainment robot from SONY. 

[Image: Aibo from SONY]
3. We also have software agents, or softbots, that have some functions as sensors and some functions as actuators. Askjeeves.com is an example of a softbot.
4. Expert systems like the Cardiologist are agents.
5. Autonomous spacecraft are agents.
6. Intelligent buildings are agents.

Intelligent Agents 
An intelligent agent must sense, must act, must be autonomous (to some extent), and must be rational.
AI is about building rational agents. An agent is something that perceives and acts.
A rational agent always does the right thing. The following are the characteristics of an intelligent agent:
  • Rationality
Perfect rationality assumes that the rational agent knows everything and will take the action that maximizes its utility. Human beings do not satisfy this definition of rationality.
Rational action is the action that maximizes the expected value of the performance measure given the percept sequence to date.
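
As a sketch, rational action selection can be written as an argmax over expected performance. The outcome model and performance measure below are assumed to be supplied by the designer; the function names are illustrative.

def rational_action(actions, outcomes, performance):
    """Return the action that maximizes expected performance.

    actions        -- the available actions
    outcomes(a)    -- assumed model: (probability, resulting_state) pairs for action a
    performance(s) -- assumed measure: numeric score of a resulting state
    """
    def expected_value(action):
        return sum(p * performance(state) for p, state in outcomes(action))

    return max(actions, key=expected_value)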
  • Bounded Rationality
“Because of the limitations of the human mind, humans must use approximate methods to handle many tasks.” Herbert Simon, 1972
Evolution did not give rise to optimal agents, but to agents that are, in some sense, locally optimal at best. In 1957, Simon proposed the notion of bounded rationality: the property of an agent that behaves as nearly optimally with respect to its goals as its resources allow.
Under these premises, an intelligent agent is expected to act optimally to the best of its abilities and within its resource constraints.
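
One way to picture bounded rationality in code is an anytime decision procedure that returns the best action found within a fixed time budget. This is a sketch under assumptions, not an optimal algorithm; evaluate() stands in for an approximate scoring method.

import time

def bounded_decision(candidate_actions, evaluate, budget_seconds=0.05):
    """Return the best action found before the time budget expires."""
    deadline = time.monotonic() + budget_seconds
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        if time.monotonic() >= deadline:
            break  # resources exhausted: settle for the best found so far
        score = evaluate(action)  # assumed (possibly approximate) evaluator
        if score > best_score:
            best_action, best_score = action, score
    return best_action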
  • Agent Environment
Environments in which agents operate can be defined in different ways. It is helpful to view the following definitions as referring to the way the environment appears from the point of view of the agent itself. 
  • Observability
In terms of observability, an environment can be characterized as fully observable or partially observable.
In a fully observable environment all of the environment relevant to the action being considered is observable. In such environments, the agent does not need to keep track of the changes in the environment. A chess playing system is an example of a system that operates in a fully observable environment.
In a partially observable environment, the relevant features of the environment are only partially observable. A bridge playing program is an example of a system operating in a partially observable environment. 
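
Because the relevant features are hidden, an agent in a partially observable environment must maintain an internal belief state. Below is a minimal sketch of one belief update step; the transition model predict and the sensor model observe are assumed to be given, and deterministic sensing is assumed for simplicity.

def update_belief(belief, action, percept, predict, observe):
    """One belief-state update for a partially observable environment.

    belief  -- set of world states the agent considers possible
    predict -- assumed transition model: predict(state, action) -> set of states
    observe -- assumed sensor model: observe(state) -> percept
    """
    predicted = set()
    for state in belief:
        predicted |= predict(state, action)  # states reachable via the action
    # keep only the predicted states consistent with what was actually perceived
    return {state for state in predicted if observe(state) == percept}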
  • Determinism
In deterministic environments, the next state of the environment is completely determined by the current state and the agent's action. Image analysis systems are examples of this kind of situation: the processed image is determined completely by the current image and the processing operations.
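
Determinism can be pictured as a pure function over the current state: the same state and the same action always yield the same next state. The thresholding operation below is an illustrative stand-in for an image-processing step.

def threshold(image, cutoff=128):
    """A deterministic 'action' on a grayscale image (rows of pixel values)."""
    return [[255 if pixel >= cutoff else 0 for pixel in row] for row in image]

image = [[30, 200], [150, 90]]
assert threshold(image) == threshold(image)  # identical inputs, identical results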
  • Episodicity
An episodic environment means that subsequent episodes do not depend on the actions that occurred in previous episodes.
In a sequential environment, by contrast, the agent engages in a series of connected episodes, so current decisions can affect future ones.
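
The difference shows up in the shape of the control loop, as in the sketch below. Here act_on, observe, and apply are hypothetical interfaces, not from any library.

def run_episodic(agent, percepts):
    """Episodic: each percept is handled independently, so nothing carries
    over from one episode to the next."""
    return [agent.act_on(p) for p in percepts]

def run_sequential(agent, environment, steps):
    """Sequential: each action changes the environment that produces the
    next percept, so earlier choices constrain later ones."""
    actions = []
    for _ in range(steps):
        percept = environment.observe()
        action = agent.act_on(percept)
        environment.apply(action)  # feedback: the action shapes future percepts
        actions.append(action)
    return actions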
  • Dynamism
Static Environment: the environment does not change from one state to the next while the agent is considering its course of action. The only changes to the environment are those caused by the agent itself.
            • A static environment does not change while the agent is thinking.
            • The passage of time as an agent deliberates is irrelevant.
            • The agent doesn’t need to observe the world during deliberation.

A dynamic environment changes over time independently of the agent's actions; thus, if an agent does not respond in a timely manner, this counts as a choice to do nothing.
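
A deadline-aware control step makes this explicit: if deliberation overruns the deadline, the agent has effectively chosen to do nothing. This is a sketch; act_on and the "no-op" action are hypothetical.

import time

def control_step(agent, percept, deadline):
    """One step in a dynamic environment with a hard response deadline."""
    action = agent.act_on(percept)
    if time.monotonic() > deadline:
        return "no-op"  # the world moved on while the agent was deliberating
    return action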
  • Continuity
If the number of distinct percepts and actions is limited, the environment is discrete; otherwise, it is continuous.
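
For example (both cases below are illustrative):

# Discrete: a limited set of distinct actions, e.g. legal chess moves.
chess_actions = {"e2e4", "g1f3", "O-O"}  # a small sample of a finite set

# Continuous: actions drawn from a real interval, e.g. a steering angle.
def steer(angle_degrees):
    """Any real value in [-45.0, 45.0] is a distinct action."""
    if not -45.0 <= angle_degrees <= 45.0:
        raise ValueError("steering angle out of range")
    return angle_degrees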
