Types of Agent Programs in AI
The option of constructing an explicit lookup table is out of the question. The visual input from a single camera comes in at the rate of 50 megabytes per second (25 frames per second, with 8 bits of color and 8 bits of intensity information per pixel).
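To get a feel for these numbers, the raw percept volume for one hour of driving can be checked directly (figures taken from the text above):

```python
# Back-of-the-envelope check of the percept data rate quoted in the text:
# 50 MB of camera input per second, accumulated over one hour.
mb_per_second = 50
seconds_per_hour = 60 * 60
mb_per_hour = mb_per_second * seconds_per_hour
print(mb_per_hour)  # 180000 MB, i.e. about 180 GB of raw percepts per hour
```

And a lookup table would need an entry not per percept but per percept *sequence*, which is what makes the approach hopeless.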

So the lookup table for an hour would have 2^(60·50M) entries. Fortunately, many portions of the table can be summarized as commonly occurring input/output associations. For example, if the car in front brakes and its brake lights come on, then the driver should notice this and initiate braking. We call such a connection a condition–action rule, written as: if car-in-front-is-braking then initiate-braking. Humans also have many such connections, some of which are learned responses (as for driving) and some of which are innate reflexes (such as blinking when something approaches the eye).
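A minimal Python sketch of such condition–action rules and the lookup that uses them (the percept and action names are illustrative, not from any real system):

```python
# Condition–action rules for a simple reflex agent; names are illustrative.
RULES = {
    "car-in-front-is-braking": "initiate-braking",
    "object-approaching-eye": "blink",
}

def simple_reflex_agent(percept):
    """Interpret the percept as a condition and return the matching action."""
    condition = percept  # a fuller agent would run an interpret-input step here
    return RULES.get(condition, "no-op")

print(simple_reflex_agent("car-in-front-is-braking"))  # initiate-braking
```

The table of rules replaces the astronomically large percept-sequence table: the agent's choice depends only on the current percept.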

In the course of the book, we will see several different ways in which such connections can be learned and implemented. Figure 2 shows the structure of this program in schematic form; do not worry if this seems trivial, as it gets more interesting shortly. We use rectangles to denote the current internal state of the agent's decision process, and ovals to represent the background information used in the process. The agent program takes a percept as input and returns an action.

First, based on the input, the agent interprets the current state of the environment. Then it looks up the rule whose condition matches this state in a static table of condition–action rules.

Finally, it returns the action associated with that rule.

Model-based Reflex Agents. This type of agent is a little more complicated than the simple reflex agent. A model-based agent maintains an internal state based on the percept history. This internal state helps the agent handle a partially observable environment. The agent considers both the internal state and the current percept when choosing an action, and it updates the internal state at each step.
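A minimal sketch of this update-then-act cycle (the function names and the toy "model" are hypothetical, kept deliberately small):

```python
# Sketch of a model-based reflex agent step; all names are hypothetical.
# The internal state is the agent's picture of the world, updated every step.

def update_state(state, last_action, percept):
    """Toy world model: fold the new percept and the last action into the state."""
    new_state = dict(state)
    new_state["last_percept"] = percept
    new_state["last_action"] = last_action
    return new_state

def model_based_step(state, last_action, percept):
    """One step: update the internal state, then pick an action by rule."""
    state = update_state(state, last_action, percept)
    if state["last_percept"] == "car-in-front-is-braking":
        action = "brake"
    else:
        action = "drive"
    return state, action

state, action = model_based_step({}, None, "car-in-front-is-braking")
print(action)  # brake
```

Unlike the simple reflex agent, the returned state is threaded into the next call, so past percepts can influence future decisions.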

Updating the internal state requires two kinds of knowledge: how the world evolves independently of the agent, and how the agent's own actions affect the world. The implementation of this knowledge is called the model of the world, and an agent that uses such a model to decide what action to take is called a model-based agent. Note that it is rarely possible to determine the exact state of a partially observable environment.

Goal-Based Agents. For some tasks, it is not enough to know how the world works. In some cases, it is desirable to have goal information that describes desirable situations.

A goal-based agent combines this goal information with its model of the world to choose actions that will reach its goal. Goal-based agents are usually less efficient but more flexible than reflex-based agents: a goal-based agent can adapt itself to the environment. For example, it can change its behavior based on the sensor data.
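As an illustration, goal-based action selection can be sketched with a toy transition model (the one-dimensional track, the goal position, and the action names are invented for this example):

```python
# Sketch of a goal-based agent: choose the action whose predicted outcome
# is closest to the goal. The "world" is a toy 1-D track; all names invented.

GOAL = 5  # target position on the track

def predict(position, action):
    """Toy transition model: actions move the agent along the track."""
    return position + (1 if action == "forward" else -1)

def goal_based_agent(position):
    """Pick the action whose predicted result lies nearest the goal."""
    return min(["forward", "back"],
               key=lambda a: abs(GOAL - predict(position, a)))

print(goal_based_agent(2))  # forward
print(goal_based_agent(7))  # back
```

The same agent serves any goal just by changing `GOAL`, which is exactly the flexibility that a fixed table of condition–action rules lacks.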

Utility-Based Agents. Is reaching a goal all that matters? No, not quite. If we look at goal-based agents, we see that they select an action based only on whether it achieves the goal, which gives just a crude, binary distinction between desirable and undesirable states. A utility-based agent instead uses a utility function, which maps a state onto a number describing how desirable that state is, so the agent can compare different ways of reaching a goal and trade off conflicting goals.
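A small sketch of this idea (the state features, weights, and action names are illustrative, not a real utility model):

```python
# Sketch of a utility-based agent: a utility function scores each predicted
# outcome, and the agent picks the action with the highest score.
# States are (speed, safety) pairs; values and weights are illustrative.

def utility(state):
    """Map a state to a real-valued degree of desirability."""
    speed, safety = state
    return 0.3 * speed + 0.7 * safety  # hypothetical trade-off weights

def utility_based_agent(outcomes):
    """Pick the action whose predicted outcome has the highest utility."""
    return max(outcomes, key=lambda action: utility(outcomes[action]))

outcomes = {"overtake": (0.9, 0.4), "stay": (0.5, 0.9)}
print(utility_based_agent(outcomes))  # stay
```

Both actions might eventually reach the goal; the utility function is what lets the agent prefer the safer route over the faster one.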

Problem generator — it is responsible for suggesting actions that will lead to new and informative experiences. It is one of the four components of a learning agent, alongside the performance element, the learning element, and the critic. In summary, agents have a variety of components, and those components can be represented in many ways within the agent program, so there appears to be great variety among learning methods.

Learning in intelligent agents can be summarized as a process of modifying each component of the agent to bring the components into closer agreement with the available feedback information, thereby improving the overall performance of the agent. All agents can improve their performance through learning.
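That "closer agreement with feedback" idea can be sketched in a few lines: a single component parameter is nudged toward observed outcomes (the braking-distance scenario and all numbers here are illustrative):

```python
# Sketch of learning as feedback-driven adjustment: nudge a component's
# parameter toward observed outcomes. Scenario and numbers are illustrative.

estimate = 10.0        # agent's current braking-distance parameter (meters)
learning_rate = 0.5    # how strongly each piece of feedback moves the estimate

for observed in [14.0, 13.0, 15.0]:  # feedback supplied by the critic
    estimate += learning_rate * (observed - estimate)

print(round(estimate, 2))  # 13.75, closer to the observed distances
```

Each update shrinks the gap between the component's prediction and the feedback, which is the sense in which learning improves the agent's overall performance.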

Structure of agents in artificial intelligence. Posted: March 20, in artificial intelligence.
