1_intelligent_agents_fall_2021 #6
base: master
Conversation
{
"cells": [
{
"cell_type": "markdown",
First of all, lecture notes must be in markdown format. It means you should have a .md file as output, not a Jupyter notebook!
" <br>\n", | ||
" <br>\n", | ||
" <br>\n", | ||
" <h1 style=\"font-size: 40px; margin: 10px 0;\">AI - Intelligent Agent</h1>\n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Convert "Agent" to "Agents"
"source": [ | ||
"# Intelligent agents\n", | ||
"An <b>intelligent agent</b> is anything that perceives its environment through sensors and acts upon that environment through its actuators. \n", | ||
" we will use the term <b>percept</b> to refer to the agent's perceptual inputs at any given moment.\n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Every sentence must start with a capital letter, and there are similar problems in the next sections. Please revise them all.
"metadata": {}, | ||
"source": [ | ||
"# Rational agents and performance measure\n", | ||
"a <b>rational</b> agent choose the set of action in order to maximize its performance. agents use a performance measure to evaluate the desirability of any given sequence. In other words, an agent will choose the action (or a sequence of them) that maximize the expected value of its performance measure." |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
change "choose" to "chooses"
"metadata": {}, | ||
"source": [ | ||
"#### Rationality vs perfection\n", | ||
"Keep in mind that rationality is distinct from omniscience. an omniscience agent knows the actual outcome of its actions but in reality, an agent only knows the expected outcome of its action.\n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
an omniscience agent -> an omniscient agent
provide an example to clarify this
"#### Rationality vs perfection\n", | ||
"Keep in mind that rationality is distinct from omniscience. an omniscience agent knows the actual outcome of its actions but in reality, an agent only knows the expected outcome of its action.\n", | ||
"#### Autonomy\n", | ||
"a rational agent should be autonomous meaning it mustn't only rely on the prior knowledge of its designer and must learn to compensate for partial or incorrect prior knowledge. In other words, rational agents should learn from experience. for example, in the vacuum world our agent could start to learn when the rooms usually get dirty based on its experience." |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Some of your sentences look very similar to the reference book! It would be better if you try to express them in your own words.
"source": [ | ||
"# Task environment (PEAS)\n", | ||
"we have already talked about performance measure, task environment, actuators and sensors. we group all these under the heading of the <b>Task enviroment </b> and we abbreviate it as <b>PEAS</b>(<b>P</b>erformance measure, <b>E</b>nviroment, <b>A</b>ctuators, <b>S</b>ensors). When designing an agent our first step should be specifying the task enviroment.\n", | ||
"#### Types of environment\n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This part is too brief. You should explain them in much more detail using examples. The reader should gain more information when he/she studies your markdown in comparison with slides!
"metadata": {}, | ||
"source": [ | ||
"#### PEAS example\n", | ||
"here are a few example of specifying PEAS for different agents.\n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
example -> examples
"metadata": {}, | ||
"source": [ | ||
"# Type of agents\n", | ||
"In this section we will introduce three basic kinds of basic agent programs.(The agent program is simply a program which implement the agent function.)\n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
implement -> implements
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# Type of agents\n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Try to expand this part. There are other types of agents that aren't in slides but you can cover them here.
"metadata": {}, | ||
"source": [ | ||
"## Goal-based agents\n", | ||
"This kind of agent has a specific goal and its tries to reach that goal efficiently. They have a model of how the world evolves in response to actions and they make decisions based on (hypothesized) consequences of actions to reach their goal state. Search and Planning are two subfields that are closely tied with these kind of agents. In other words, this kinds of agents act on <b>how the world WOULD BE.</b> \n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
its -> it
"source": [ | ||
"## Reflex agents\n", | ||
"This is the simplest kind of agent. they choose their next action only based on their current percept. In other words, they do not consider the future consequences of their actions and only consider <b>how the world IS.</b> \n", | ||
"as an example look at this Pacman agent below at each turn the agent look at its surrounding and chooses the direction that has a point in it and stops when there are no points around it.\n", |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Punctuation after "below"
- Try to write out more details in the commented sections.
- Fix the grammar mistakes in your text.
- Start every sentence with a capital letter.
- Try to use your own words in sentences.
- [Conclusion](#Conclusion)
- [References](#References)

# Introduction
Write out at least a paragraph for this section and try to explain why this topic is important.
It's not actually completed yet!
This kind of agent like goal-based agents has a goal. But they also have a Utility function they seek to reach their goal in a way that maximizes the utility function. For example, think about an automated car agent. They are many ways for this agent to get from point A to point B. But some of them are quicker, safer, cheaper. The utility function allows the agent to compare different states with each other and ask the question how happy am I in this state.
In other words, this kind of agent act on <b>how the world will LIKELY be.</b>

# Conclusion
In this part, it's better to write some sentences instead of just listing sub-topics. For example:
"We discussed intelligent agents which are ... "
"We also tried to explain PEAS using some examples ... "
- [Properties of task environments](#Properties-of-task-environments)
- [Types of environment](#Types-of-environment)
- [Types of environment example](#Types-of-environment-example)
- [Type of agents](#Type-of-agents)
"Types"
An <b>intelligent agent</b> is anything that perceives its environment through sensors and acts upon that environment through its actuators.
We will use the term <b>percept</b> to refer to the agent's perceptual inputs at any given moment.
We can describe an agent's behavior by the agent function.
<b>Agent function</b> maps any given percepts sequence to an action. But how does the agent know what sequence it must choose? we will try to answer this question using a simple example.
"any given percept"
#### Rationality vs perfection
Keep in mind that rationality is distinct from omniscience. An omniscient agent knows the actual outcome of its actions but in reality, an agent only knows the expected outcome of its action. For example, imagine your trying to cross the street and no cars are on the street Naturally, you will cross the street to reach your goal. now imagine as you are passing the street a meteorite falls on you. Can anyone blame you for being irrational and not expecting a meteorite to flatten you?
imagine you're - naturally - Now
# Properties of task environments

#### Types of environment
we can categorize an environment in many ways, you will find some of the most important ones listed below.
We
we can categorize an environment in many ways, you will find some of the most important ones listed below.

<ul>
<li><b>Fully observable or partially observable</b> (Do the agent sensors give access to the complete state of the environment at each time?)</li>
agent's sensors
<li><b>Single agent or multiagent</b> (Are there more than one agent in the environment?)</li>
<ul>
<li>We say an environment is a multiagent environment if there is more than one agent operating in it otherwise we say the environment is sigle agent.</li>
we say the environment is single agent.
</ul>
<br>

<li><b>Single agent or multiagent</b> (Are there more than one agent in the environment?)</li>
Using hyphen is better: single-agent and multi-agent
<ul>
<li>We say an environment is a multiagent environment if there is more than one agent operating in it otherwise we say the environment is sigle agent.</li>
<li>In some cases, we can model our environment both as a single agent and multiagent environment. For example, imagine an automatic taxi agent. Should this agent treat the other cars as objects or as another agent? It's better to model our environment as a multiagent environment if the behavior of the other entities can be modeled as an agent seeking to maximize its performance measure which is somehow affected by our agent.</li>
<li>a multiagent environment could be competitive or cooperative or even a mix of both.</li>
A multi-agent
<li>We say an environment is a multiagent environment if there is more than one agent operating in it otherwise we say the environment is sigle agent.</li>
<li>In some cases, we can model our environment both as a single agent and multiagent environment. For example, imagine an automatic taxi agent. Should this agent treat the other cars as objects or as another agent? It's better to model our environment as a multiagent environment if the behavior of the other entities can be modeled as an agent seeking to maximize its performance measure which is somehow affected by our agent.</li>
<li>a multiagent environment could be competitive or cooperative or even a mix of both.</li>
<li><b>examples</b>: chess and automatic driving are multiagent environments. solving a crossword puzzle is a single agent environment.</li>
Chess and ... - Solving ...
</ul>
<br>

<li><b>Episodic or sequential</b> (Is the agent's experience divided into atomic "episodes“ where the choice of action in each episode depends only on the episode itself?)</li>
double quotations are not in the same format, the first one is " while the other is “
<li><b>Episodic or sequential</b> (Is the agent's experience divided into atomic "episodes“ where the choice of action in each episode depends only on the episode itself?)</li>
<ul>
<li>We say an environment is episodic if the agent experience can be divided into atomic "episodes" In a way that the action taken in an episode is independent of the previous episodes actions.</li>
agent's experience - in a way
<ul>
<li>We say an environment is episodic if the agent experience can be divided into atomic "episodes" In a way that the action taken in an episode is independent of the previous episodes actions.</li>
<li>We say an environment is sequential if the current decision could affect all future decisions. </li>
<li><b>examples</b>: Chess and automatic driving are sequential. a part picking robot is episodic.</li>
A part picking ...
<li><b>Static or dynamic</b> (Is the environment unchanged while an agent is deliberating?)</li>
<ul>
<li>We say an environment is dynamic if it can change while the agent is deliberating.</li>
<li>There is a special case that the environment doesn't change but the performance score has a time penalty we call these environments semi-dynamic.</li>
after "penalty" use a punctuation
<br>
<li><b>Discrete or continuous</b> (Are there a limited number of distinct, clearly defined states, percepts, and actions?)</li>
<ul>
<li>We say an environment's state is discrete if there are a finite number of distinct states otherwise we say the environment's state in continuous.</li>
we say the environment's state is continuous.
</ul>

#### Types of environment example
Here are a few examples of Identifying an environment's different dimensions.
identifying
#### Types of environment example
Here are a few examples of Identifying an environment's different dimensions.

| environment| Fully observable? | Deterministic? | Episodic? | Static? | Discrete?|Single agent?|
Use "Environment" in the first row.
## Reflex agents
This is the simplest kind of agent. They choose their next action only based on their current percept. In other words, they do not consider the future consequences of their actions and only consider <b>how the world IS.</b>
As an example look at this Pacman agent below, at each turn the agent look at its surrounding and chooses the direction that has a point in it and stops when there are no points around it.
looks at
</ul>

## Reflex agents
This is the simplest kind of agent. They choose their next action only based on their current percept. In other words, they do not consider the future consequences of their actions and only consider <b>how the world IS.</b>
simplest kind of agents
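The Pacman behaviour described here (look only at the adjacent cells, move to one containing a point, stop when none remain) can be sketched as a reflex rule. This is a hypothetical toy grid, not code from the submitted notebook:

```python
# Reflex agent on a toy grid: 1 = point, 0 = empty.
# The agent inspects only its immediate neighbours (the current percept)
# and never reasons about future consequences.

def reflex_step(grid, pos):
    """Return the first neighbouring cell that contains a point, or None to stop."""
    r, c = pos
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # up, down, left, right
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 1:
            return (nr, nc)
    return None  # no points around: the agent stops

grid = [[0, 1],
        [0, 0]]
print(reflex_step(grid, (0, 0)))  # (0, 1)
```

Because the rule ignores anything beyond the adjacent cells, this agent can stop even when points remain elsewhere on the grid, which is exactly the weakness the text attributes to reflex agents.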
## Goal-based agents
This kind of agent has a specific goal and it tries to reach that goal efficiently. They have a model of how the world evolves in response to actions, and they make decisions based on (hypothesized) consequences of actions to reach their goal state. Search and Planning are two subfields that are closely tied with these kinds of agents. In other words, these kinds of agents act on <b>how the world WOULD BE.</b>
as an example look at this Pacman agent below. the goal is to collect every point.
The goal is
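In contrast to the reflex agent, the goal-based Pacman described above can use its model of the grid to search for a path to a point. A minimal sketch using breadth-first search (hypothetical toy grid, not code from the submitted notebook):

```python
from collections import deque

# Goal-based agent: uses a model of the world to search for a path to a
# goal state, rather than reacting only to the current percept.

def path_to_nearest_point(grid, start):
    """Breadth-first search for the shortest path to any cell containing a point (1)."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if grid[r][c] == 1:  # goal test: this cell holds a point
            return path
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no points left anywhere: every goal is reached

grid = [[0, 0],
        [0, 1]]
print(path_to_nearest_point(grid, (0, 0)))
```

Repeating this search after each collected point yields an agent that keeps acting until every point is gone, unlike the reflex agent that stops as soon as no point is adjacent.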
## Utility-based agents
This kind of agent like goal-based agents has a goal. But they also have a Utility function they seek to reach their goal in a way that maximizes the utility function. For example, think about an automated car agent. They are many ways for this agent to get from point A to point B. But some of them are quicker, safer, cheaper. The utility function allows the agent to compare different states with each other and ask the question how happy am I in this state.
But they also have a utility function.
and use a period to end the sentence
## Utility-based agents
This kind of agent like goal-based agents has a goal. But they also have a Utility function they seek to reach their goal in a way that maximizes the utility function. For example, think about an automated car agent. They are many ways for this agent to get from point A to point B. But some of them are quicker, safer, cheaper. The utility function allows the agent to compare different states with each other and ask the question how happy am I in this state.
In other words, this kind of agent act on <b>how the world will LIKELY be.</b>
acts on
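The quicker/safer/cheaper trade-off from the automated-car example can be made concrete with a utility function that scores each candidate route. The routes and weights below are invented for illustration only:

```python
# Utility-based agent: ranks candidate routes with a utility function
# instead of merely checking whether a goal is reached.
# The weights and route data are made up for illustration.

def utility(route, w_time=-1.0, w_risk=-10.0, w_cost=-0.5):
    """Higher is better: penalise travel time, accident risk, and fuel cost."""
    return w_time * route["time"] + w_risk * route["risk"] + w_cost * route["cost"]

routes = [
    {"name": "highway",  "time": 20, "risk": 0.2, "cost": 12},
    {"name": "backroad", "time": 35, "risk": 0.1, "cost": 8},
]
best = max(routes, key=utility)  # the agent picks the route it is "happiest" with
print(best["name"])  # highway
```

Both routes reach the goal, so a purely goal-based agent could not choose between them; the utility function is what breaks the tie.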
## Learning agents
This kind of agent usually has 4 parts. the most important two are "the learning element", which is responsible for making improvements, and the "performance element", which is responsible for selecting external actions. The learning element uses feedback from a "critic" on how the agent is doing and determines how the performance element, or "actor", should be modified to do better in the future.
The most
## Learning agents
This kind of agent usually has 4 parts. the most important two are "the learning element", which is responsible for making improvements, and the "performance element", which is responsible for selecting external actions. The learning element uses feedback from a "critic" on how the agent is doing and determines how the performance element, or "actor", should be modified to do better in the future.
The last part of these agents is the "problem generator" which is responsible for suggesting actions that will lead to new unexplored states.
These agents try to do their best by both exploring the environment and using the gathered information to decide rationally. one of the advantages of Learning agents is that they can be deployed in an environment that they don't have a lot of prior knowledge on. they will gain this knowledge over time by exploring that environment.
One of the advantages of learning agents is - They will gain
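The four parts named above can be sketched as a small class. The structure follows the text; the learning rule itself (a running average of rewards) is a made-up placeholder, not anything from the submitted notebook:

```python
# Structural sketch of a learning agent's four parts: the critic's feedback
# arrives as a reward, the learning element updates the agent's knowledge,
# the performance element picks actions, and the problem generator
# suggests less-explored actions.

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # estimates the learning element maintains
        self.counts = {a: 0 for a in actions}    # how often each action was tried

    def performance_element(self):
        """Select the external action that currently looks best."""
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        """Suggest the least-tried action, leading toward unexplored states."""
        return min(self.counts, key=self.counts.get)

    def learning_element(self, action, reward):
        """Use the critic's feedback (reward) to improve future selections."""
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n  # running average

agent = LearningAgent(["left", "right"])
agent.learning_element("right", 1.0)  # critic reports that "right" worked well
print(agent.performance_element())    # right
```

The split between exploiting what has been learned (`performance_element`) and exploring further (`problem_generator`) mirrors the exploration/exploitation balance the text describes.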
# Conclusion
We discussed the concept of an intelligent agent and the difference between a rational agent and a perfect agent.
then we talked about specifying the task environment for an agent and how can we categorize some main concepts of an environment. We also talked about some agent architectures that are commonly used.
Then
how we can categorize
There are some problems with grammar, etc. that should be solved. But the content is good!
@nimajam41