Final Review

Hang Ma, SFU
Exam will be closed-book and cumulative.

Topics that are not covered in this review might still be covered in the exam.

These slides are new and can contain mistakes and typos.
Please report them to Hang ([email protected]).

Skeleton of Search Algorithms

1. Start with a tree that contains only one node, labeled with the start state.
2. If there are no unexpanded fringe nodes, stop unsuccessfully.
3. Pick an unexpanded fringe node n. Let s(n) be the state it is labeled with.
4. If s(n) is a goal state, stop successfully and return the path from the root node to n in the tree (i.e., return the sequence of states that label the nodes on the path).
5. Expand n, that is, create a child node of n for each of the successor states of s(n), labeled with that successor state.
6. Go to 2.
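
A minimal Python sketch of this skeleton (the callables `pick`, `is_goal`, and `successors` are illustrative assumptions; `pick` must remove and return one fringe node):

```python
def tree_search(start, is_goal, successors, pick):
    """Generic tree search; step numbers refer to the skeleton above."""
    fringe = [(start, [start])]          # 1. tree with only the start node
    while fringe:                        # 2. fail if no fringe nodes remain
        state, path = pick(fringe)       # 3. select an unexpanded fringe node
        if is_goal(state):               # 4. goal test on the selected node
            return path                  #    return the sequence of states
        for succ in successors(state):   # 5. expand n: one child per successor
            fringe.append((succ, path + [succ]))
    return None                          # 2. stop unsuccessfully
```

Picking the shallowest fringe node with `fringe.pop(0)` yields breadth-first search; picking the deepest with `fringe.pop()` yields depth-first search.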

Skeleton of Search Algorithms

  • The search algorithms differ only in how they select the unexpanded fringe node.
  • If no knowledge other than the current tree is available to guide the decision, then a search algorithm is called uninformed (or blind).
  • Otherwise, a search algorithm is called informed. If the knowledge consists of estimates of the goal distances of the states, the informed search algorithm is called heuristic. By goal distance, we mean the minimum cost of any path (action sequence) from the state in question to any goal state.

Uninformed Search

  • Breadth-First Search (operator cost = one)
    • Pick an unexpanded fringe node n with the smallest depth
    • Duplicate detection/pruning, early termination
  • Uniform-Cost Search (operator cost = positive)
    • Pick an unexpanded fringe node n with the smallest cost g(n)
    • Duplicate detection/pruning, no early termination
  • Depth-First Search
    • Pick an unexpanded fringe node n with the largest depth
    • Optional duplicate detection/pruning along one branch, early termination
  • (Depth-First) Iterative-Deepening Search
    • Combine the best properties of breadth-first and depth-first searches
    • Implement a breadth-first search as a series of depth-first searches with increasing depth limits (see the sketch below)
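
A sketch of iterative deepening under the same illustrative interface as above (the `max_depth` cap is an assumption to keep the example terminating):

```python
def depth_limited_dfs(state, is_goal, successors, limit, path=None):
    """Depth-first search that never expands nodes below depth `limit`."""
    path = path or [state]
    if is_goal(state):
        return path
    if limit == 0:
        return None
    for succ in successors(state):
        found = depth_limited_dfs(succ, is_goal, successors, limit - 1,
                                  path + [succ])
        if found is not None:
            return found
    return None

def iterative_deepening(start, is_goal, successors, max_depth=100):
    """Breadth-first behavior from a series of depth-limited DFS runs."""
    for limit in range(max_depth + 1):
        found = depth_limited_dfs(start, is_goal, successors, limit)
        if found is not None:
            return found
    return None
```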


Heuristics

  • The h-value (= heuristic value) of a state approximates its goal distance. It should be close to the goal distance without going over.
  • Admissible:
    • 0 ≤ h(s) ≤ gd(s) for all states s.
  • Consistent:
    • h(s) = 0 for all goal states s, and 0 ≤ h(s) ≤ c(s, s') + h(s') for all non-goal states s and their successor states s'.
  • Dominating:
    • h-values h(s) dominate h-values h'(s) if and only if, for all states s, h(s) ≥ h'(s).
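
These definitions are easy to check mechanically on a small state space. A sketch, assuming a toy dict-based encoding of h-values, goal distances, step costs, and successors (all names illustrative):

```python
def is_admissible(h, gd, states):
    """Admissible: 0 <= h(s) <= gd(s) for every state s."""
    return all(0 <= h[s] <= gd[s] for s in states)

def is_consistent(h, cost, successors, goals, states):
    """Consistent: h(s) = 0 at goal states, and
    0 <= h(s) <= c(s, s') + h(s') for non-goal s and each successor s'."""
    for s in states:
        if s in goals:
            if h[s] != 0:
                return False
        elif not all(0 <= h[s] <= cost[s, t] + h[t] for t in successors[s]):
            return False
    return True
```

Consistency (together with h = 0 at goals) implies admissibility, by induction along any path to a goal; the converse does not hold.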

Informed Search

  • Pure Heuristic (Greedy Best-First) Search (operator cost = positive)
    • Pick an unexpanded fringe node n with the smallest f(n) = h(s(n))
    • Duplicate detection/pruning, early termination
  • A* Search (operator cost = positive)
    • Pick an unexpanded fringe node n with the smallest f(n) = g(n) + h(s(n)) (see the sketch below)
    • Duplicate detection/pruning by default, no early termination
  • Iterative Deepening A* (IDA*) (operator cost = positive)
    • Combine the best properties of A* and depth-first searches, which can be necessary since A* still needs an exponential amount of memory
    • Implement an A* search with a series of depth-first searches with increasing f-value limits
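
A compact A* sketch with duplicate detection, assuming `successors(s)` yields `(state, step_cost)` pairs and `h` maps states to heuristic values (both interfaces are illustrative):

```python
import heapq
from itertools import count

def astar(start, is_goal, successors, h):
    """Pick the OPEN node with the smallest f(n) = g(n) + h(s(n));
    the goal test happens on expansion, so there is no early termination."""
    tie = count()                                  # tie-breaker for the heap
    open_list = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}                            # cheapest g seen per state
    while open_list:
        f, _, g, state, path = heapq.heappop(open_list)
        if g > best_g.get(state, float("inf")):
            continue                               # dominated duplicate; prune
        if is_goal(state):
            return path, g
        for succ, step_cost in successors(state):
            g_new = g + step_cost
            if g_new < best_g.get(succ, float("inf")):
                best_g[succ] = g_new
                heapq.heappush(open_list,
                               (g_new + h(succ), next(tie), g_new, succ,
                                path + [succ]))
    return None, float("inf")
```

With an admissible heuristic, the first goal expansion returns an optimal solution.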

Enhanced A* with Collapse and Restore Macros

  • IDA*: The end of an iteration can be seen as collapsing the entire tree into the start node with the f-value of the minimal frontier node
  • Simplified Memory Bounded A* (SMA*): Once OPEN is too large, areas with the highest f-values in OPEN are collapsed
  • RBFS/ILBFS: Only one branch of the tree (plus the siblings of nodes on that branch) is kept in memory; any frontier below these nodes is collapsed
  • A* with Lookahead: DFS lookahead within a depth/cost limit; the lookahead subtree is collapsed

Partial Expansion

  • Partial Expansion A*
    • One-step lookahead
    • Child nodes exceeding the cost limit K are collapsed back into the parent (updating K, the stored big-F value of the parent node)
    • Surplus nodes are generated but are never added to OPEN
  • Enhanced Partial Expansion A* (EPEA*)
    • Only those children with f = K are generated
    • Needs to construct and store an Operator Selection Function for each node that determines which operators result in each possible f-value

Multiple Heuristics

  • A* max
    • Use the maximum to get a dominating heuristic (see the sketch below)
    • Preserves consistency
    • Overhead of multiple lookups, with diminishing returns
  • Random selection
    • Can still give an informative heuristic
    • Consistency is not preserved
  • Lazy A* (for a cheap h1 and an expensive h2)
    • Lazy evaluation of the expensive h2
    • As strong as A* max
    • Trade-off between OPEN operations and h2 evaluations
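
A sketch of the two simplest combination strategies, given a list of heuristic functions (the wrapper names are illustrative):

```python
import random

def h_max(heuristics):
    """Pointwise maximum: dominating, and consistency is preserved,
    at the cost of one lookup per component heuristic."""
    return lambda s: max(h(s) for h in heuristics)

def h_random(heuristics, rng=random.Random(0)):
    """Random selection: only one lookup per evaluation, but generally
    inconsistent even when every component heuristic is consistent."""
    return lambda s: rng.choice(heuristics)(s)
```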

Generating Heuristics

  • Problem relaxation
    • Compute solution costs for a relaxed problem
    • Remove constraints, add edges to the state space
  • Pattern Databases (PDBs)
    • Memorizing solution costs/true distances in abstract spaces (see the sketch below)
  • True Distance Heuristics
    • Memorizing pairwise true distances for a subset of states
    • Differential/Canonical/Border heuristics
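
A PDB can be built by searching backward from the abstract goal; a minimal sketch assuming unit operator costs and an illustrative `abstract_predecessors` function (a uniform-cost search would handle general costs):

```python
from collections import deque

def build_pdb(abstract_goal, abstract_predecessors):
    """Backward breadth-first search that records the true distance
    of every reachable abstract state."""
    dist = {abstract_goal: 0}
    frontier = deque([abstract_goal])
    while frontier:
        s = frontier.popleft()
        for p in abstract_predecessors(s):
            if p not in dist:                # first visit = shortest distance
                dist[p] = dist[s] + 1
                frontier.append(p)
    return dist                              # PDB: abstract state -> h-value
```

At search time, a state is projected into the abstract space and its h-value is read off the table.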

PDBs

  • Disjoint Additive PDBs
    • Can sum up heuristic values instead of taking the max
    • The cost of a subproblem is composed of the costs of the corresponding pattern objects only
  • Compressed PDBs
    • Can give more informative heuristics than more highly abstracted PDBs of the same memory size

Symmetries in PDBs

  • Idea: Get another heuristic for free by looking up the PDB differently
  • Geometrical symmetry lookups
  • Dual lookup
    • Dual representation: exchanging the roles of objects and locations

Inconsistent Heuristics

  • A* and IDA* are still optimal with admissible but inconsistent heuristics
    • Requires node reopening/reexpansion
    • Can be partially corrected by (Bidirectional) Pathmax (see the sketch below)
    • Inconsistency can arise from
      • Random selection of (consistent) heuristics (out of K heuristics)
      • Dual lookups
      • Compressed pattern databases
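
The (one-directional) Pathmax rule is a one-liner, applied when generating a child node; a sketch:

```python
def pathmax(h_parent, step_cost, h_child):
    """Never let the child's h-value drop below h(parent) - c(parent, child),
    so f-values cannot decrease along a path. This is only a partial fix:
    it does not make an inconsistent heuristic consistent."""
    return max(h_child, h_parent - step_cost)
```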

(Bounded) Suboptimal Search

  • Bounded suboptimal search:
    • Given a bound W, we want the solution cost to be at most W × C*. This is called w-admissible.
  • Bounded-cost search
    • Given a bound B, find a solution with cost below B.
  • Anytime search
    • Improve the solution quality as time passes.
    • Can be obtained from a bounded suboptimal/cost search with increasing bounds.

(Bounded) Suboptimal Search

  • Weighted A*
    • f(n) = W_g · g(n) + W_h · h(n), or f(n) = g(n) + W · h(n)
    • Different values of W trade off runtime against solution quality
  • Potential Search (an example of bounded-cost search)
    • Choose the node with maximal potential
  • Focal Search
    • Use a FOCAL list to select promising nodes whose f-values are within the bound (see the sketch below)
    • Example: Dynamic Potential Search
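
A sketch of FOCAL-based node selection (a real implementation keeps two priority queues instead of rescanning OPEN; `h_focal` is an illustrative secondary heuristic such as a distance-to-go estimate):

```python
def focal_select(open_list, f, h_focal, w):
    """Among OPEN nodes whose f-value is within a factor w of the best
    f-value (the FOCAL set), pick the node h_focal likes most. Expanding
    only such nodes keeps the solution cost within w times optimal."""
    f_min = min(f(n) for n in open_list)
    focal = [n for n in open_list if f(n) <= w * f_min]
    return min(focal, key=h_focal)
```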

Local Search

  • Start from initial position
  • Iteratively move from current position to neighboring position
  • Use evaluation function for guidance
  • Two main classes:
    • Local search on partial solutions
    • Local search on complete solutions

Local Search with Partial Solutions

  • Depth-First BnB with node ordering
    • Can use lower bounds to prune branches
  • Limited Discrepancy Search
    • Anytime algorithm
    • Solutions are ordered according to (discrepancy against) heuristics

Local Search on Complete Solutions (notation here assumes maximization problems)

  • Hill climbing
    • Problem with local optima
    • Random restarting often helps
    • Randomized iterative improvement (transitioning to neighboring position) often helps
  • Simulated annealing (see the sketch below)
    • Worse solutions are accepted with probability ~ exp(Δ/T), where Δ = value(new) − value(current) < 0 for maximization
    • The temperature T goes (cools) down as time goes on
  • Genetic algorithms
    • How to generate the next generation:
      • Selection: select a number of states from the current generation (the fitness function can be used in any reasonable way)
      • Crossover: select two states and reproduce a child
      • Mutation: change some of the genes
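
A simulated-annealing sketch for maximization (the `neighbors`, `value`, and `schedule` callables, the step budget, and the seeded RNG are illustrative assumptions):

```python
import math
import random

def simulated_annealing(start, neighbors, value, schedule,
                        steps=10_000, rng=random.Random(0)):
    """Accept a worse neighbor (delta < 0) with probability exp(delta / T);
    the temperature T cools down over time according to `schedule`."""
    current = start
    for t in range(steps):
        T = schedule(t)
        if T <= 0:                       # fully cooled: stop
            break
        candidate = rng.choice(neighbors(current))
        delta = value(candidate) - value(current)
        if delta > 0 or rng.random() < math.exp(delta / T):
            current = candidate
    return current
```

A geometric cooling schedule such as `lambda t: 100 * 0.95 ** t` is a common choice.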

Multi-Agent Path Finding (MAPF)

  • Priority-Based Search (Cooperative A*)
    • Non-optimal and incomplete
  • Conflict-Based Search (CBS)
    • Optimal
    • Best-first search over a constraint tree; each node resolves one conflict (see the conflict-detection sketch below)
    • With disjoint splitting: the solution spaces under the two child nodes are disjoint, which creates pruning opportunities compared to non-disjoint splitting
  • A* Search with joint states
    • Independence Detection
    • Operator Decomposition
    • EPEA*
  • Increasing Cost Tree Search (ICTS)
    • Branch on different combinations for an increased (cumulative) cost
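
Conflict detection is the core primitive CBS branches on; a sketch assuming `paths` maps agent ids to per-timestep location lists, with agents that finish early waiting at their goals:

```python
def first_conflict(paths):
    """Return the first vertex or edge (swap) conflict, or None."""
    def at(agent, t):                        # position at time t (wait at goal)
        p = paths[agent]
        return p[t] if t < len(p) else p[-1]
    agents = list(paths)
    horizon = max(len(p) for p in paths.values())
    for t in range(horizon):
        for i, a in enumerate(agents):
            for b in agents[i + 1:]:
                if at(a, t) == at(b, t):     # vertex conflict: same cell
                    return ("vertex", a, b, at(a, t), t)
                if t > 0 and at(a, t) == at(b, t - 1) \
                        and at(b, t) == at(a, t - 1):
                    return ("edge", a, b, (at(a, t - 1), at(a, t)), t)
    return None                              # the paths form a valid solution
```

CBS splits on the returned conflict, constraining one of the two agents in each child node.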

CBS Improvements

  • Heuristics for CBS
    • (Cardinal) Conflict Graph (CG)
      • Cardinal: resolving the conflict results in increased costs for both child nodes
    • Dependency Graph (DG)
      • Dependency: the two-agent MAPF subproblem has a solution cost larger than the sum of the two agents' individual costs
    • Weighted Dependency Graph (WDG)
      • Edge weight is the cost difference
  • Meta-Agent CBS
    • Combines the advantages of A* + ID and basic CBS