CS 440: Assignment 3 – Probabilistic Search (and Destroy) 16:198:
The purpose of this assignment is to model your knowledge/belief about a system probabilistically, and use this belief state to efficiently direct future action.
The Problem: Consider a landscape represented by a map of cells. The cells can be of various terrain types (flat, hilly, forested, a complex maze of caves and tunnels) representing how difficult they are to search. Hidden somewhere in this landscape is a Target. Initially, the target is equally likely to be anywhere in the landscape, hence starting out:
P(Target in Cell_i) = 1 / (# of cells). (1)
This represents your prior belief about where the target is. You have the ability to search a given cell, to determine whether or not the target is there. However, the more difficult the terrain, the harder it can be to search – the more likely it is that even if the target is there, you may not find it (a false negative). We may summarize this in terms of the false negative rates:
P(Target not found in Cell_i | Target is in Cell_i) =
    0.1 if Cell_i is flat
    0.3 if Cell_i is hilly
    0.7 if Cell_i is forested
    0.9 if Cell_i is a maze of caves        (2)
The false positive rate for any cell is taken to be 0, i.e., P(Target found in Cell_i | Target not in Cell_i) = 0. Whatever the result of a search, however, it does provide some useful information about the location of the target, lowering the likelihood of the target being in the searched cell and raising the likelihood of it being in others.
If you find the target, you are done. The goal is to locate the target in as few searches as possible by utilizing the results of the observations collected.
Implementation: Maps may be generated in the following way: for a 50 by 50 grid, randomly assign each cell a terrain type (flat with probability 0.25, hilly with probability 0.25, forested with probability 0.25, and caves with probability 0.25). Select a cell uniformly at random, and set this as the location of the target. Note, this information represents the environment. The location of the target is hidden from your agent, but the agent knows each cell type and can query the environment about a requested cell.
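The map generation described above can be sketched as follows (a minimal Python sketch; the function name generate_map and the FALSE_NEG table are illustrative choices, not mandated by the assignment – the rates themselves are the ones given in equation (2)):

```python
import random

# False negative rates per terrain type, as given in the assignment.
FALSE_NEG = {"flat": 0.1, "hilly": 0.3, "forested": 0.7, "caves": 0.9}

def generate_map(dim=50):
    """Assign each cell a terrain type uniformly at random (probability 0.25
    each) and place the target in a cell chosen uniformly at random."""
    terrain = [[random.choice(list(FALSE_NEG)) for _ in range(dim)]
               for _ in range(dim)]
    target = (random.randrange(dim), random.randrange(dim))
    return terrain, target
```

Keeping terrain and target together as the environment, separate from the agent's belief array, makes it easy to enforce that the agent only ever queries the environment rather than reading the target location directly.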
Figure 1: A simple 10 x 10 map with indicated target.
For your agent, construct a 50 x 50 array of values representing the agent's current belief state: the probability, given everything observed so far, that the target is in a given cell, i.e., at time t
Belief[Cell_i] = P(Target in Cell_i | Observations through time t). (3)
Note, at time t = 0, we have Belief[Cell_i] = 1/2500.
At any time t, given the current state of Belief, the agent must determine a cell to check by some rule and search it. If the cell does not contain the target, the environment will return failure. If the cell does contain the target, the search will return failure or success with the appropriate probabilities, based on the terrain type. If the search returns failure, for whatever reason, use this observation about the selected cell to update your belief state for all cells (using Bayes' Theorem). If the search returns success, return the total number of searches taken to locate the target.
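The environment's response to a search query, as described above, can be sketched like this (the function name search and the (row, col) cell representation are assumptions for illustration; the false negative rates are those of equation (2)):

```python
import random

# False negative rates per terrain type, as given in the assignment.
FALSE_NEG = {"flat": 0.1, "hilly": 0.3, "forested": 0.7, "caves": 0.9}

def search(cell, terrain, target):
    """Environment-side search: returns True (success) only when the target
    is actually in `cell` AND the search does not produce a false negative.
    The false positive rate is 0, so a non-target cell always fails."""
    if cell != target:
        return False
    return random.random() >= FALSE_NEG[terrain[cell[0]][cell[1]]]
```

Note this function belongs to the environment, not the agent: the agent only sees the returned boolean, never the target location.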
1) Given observations up to time t (Observations_t), and a failure searching Cell_j (Observations_{t+1} = Observations_t ∧ Failure in Cell_j), how can Bayes' theorem be used to efficiently update the belief state? i.e., compute:
P(Target in Cell_i | Observations_t ∧ Failure in Cell_j). (4)
2) Given the observations up to time t, the belief state captures the current probability the target is in a given cell. What is the probability that the target will be found in Cell_i if it is searched:
P(Target found in Cell_i | Observations_t)? (5)
3) Consider the following situation. Your agent is dropped into the map at a random location and wants to find the target as quickly as possible. At every time step, the agent can either a) search the current cell the agent is in, or b) move to one of the immediate neighbors (up/down/left/right). We can consider the following basic agents:
- Basic Agent 1: Iteratively travel to the cell with the highest probability of containing the target, search that cell. Repeat until target is found.
- Basic Agent 2: Iteratively travel to the cell with the highest probability of finding the target within that cell, search that cell. Repeat until the target is found.
For both agents, ties in probability between cells should be broken based on shortest distance (minimal Manhattan distance), and broken at random between cells with equal probability and equal shortest distance. The final performance of an agent is taken to be total distance traveled + number of searches, and we want this number to be as small as possible. Generate 10 maps, and play through each map (with random target location and initial agent location each time) 10 times with each agent. Which agent is better, on average?
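The tie-breaking rule above (highest probability, then minimal Manhattan distance to the agent, then uniform random) can be sketched as follows; the function name choose_cell and the dict representation of scores are illustrative assumptions:

```python
import random

def choose_cell(scores, agent_pos):
    """Pick the cell with the highest score; break ties by minimal Manhattan
    distance to the agent, then uniformly at random among the remainder.
    `scores` maps (row, col) -> probability (either Belief[Cell_i] for Basic
    Agent 1 or the probability of finding the target for Basic Agent 2)."""
    best = max(scores.values())
    tied = [c for c, s in scores.items() if s == best]
    dist = lambda c: abs(c[0] - agent_pos[0]) + abs(c[1] - agent_pos[1])
    dmin = min(dist(c) for c in tied)
    return random.choice([c for c in tied if dist(c) == dmin])
```

Both basic agents can share this selection routine and differ only in which score they feed it.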
4) Design and implement an improved agent and show that it beats both basic agents. Describe your algorithm, and why it is more effective than the other two. Given world enough, and time, how would you make your agent even better?
Bonus: A Moving Target
In this section, the target is no longer stationary, and can move between neighboring cells (up/down/left/right). Each time you perform a search, if you fail to find the target, the target will move to a neighboring cell (with uniform probability for each). However, all is not lost – every time you search a cell, you are now given two pieces of information instead of one: first, you are told whether or not the search was successful (same false negative rates as before); and if the search was unsuccessful, you are told whether or not the target is within Manhattan distance 5 of your current location.
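The target's movement after a failed search can be sketched as below. Note the assignment does not specify what happens at the board edge; this sketch assumes the target simply chooses uniformly among its in-bounds neighbors, and the function name move_target is an illustrative choice:

```python
import random

def move_target(target, dim=50):
    """After a failed search the target steps to a uniformly random
    neighboring cell (up/down/left/right), staying on the board."""
    r, c = target
    neighbors = [(r + dr, c + dc)
                 for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= r + dr < dim and 0 <= c + dc < dim]
    return random.choice(neighbors)
```

Your belief update for the moving target will need a matching transition step that spreads each cell's probability mass over its neighbors in the same way.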
- Adapt Basic Agents 1 and 2 to this new situation and extra information. What modifications to your probabilities did this require? Are they still able to find the target? Which one is better?
- Adapt your Improved Agent to this new situation and extra information. What modifications does your Algorithm require? Is it still able to find the target? Does it still beat Basic Agents 1 and 2?