Heuristic search strategies are collectively known as best-first search: they use domain knowledge about the specific problem to guide the search for a solution. At every step an evaluation function $f(n)$ is applied to each node $n$ in the frontier to decide which is the most promising choice; inside the evaluation function, a heuristic function $h(n)$ supplies the estimate.
Note that a heuristic is only an estimate and of course cannot always be exactly right; the more accurate the evaluation function, the more likely it is to pick the best node. Some common heuristic search strategies:
- 貪婪最佳優先搜尋 (Greedy best-first search)
- A* search
- Iterative-deepening A* search
Greedy best-first search
- Method: use the evaluation function $f(n) = h(n)$ to estimate which candidate state is closest to the goal.
- Quality of the result
- Completeness: No. The search can get stuck in a loop.
- Optimality: No. It stops at the first solution found, which need not be optimal.
- Time: $O(b^m)$; the worst case is very bad, but a good evaluation function can make a decisive difference.
- Space: $O(b^m)$, very bad.
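As a concrete illustration, here is a minimal Python sketch of greedy best-first search; the function names, the grid example below, and the use of a visited set are illustrative assumptions, not part of the original notes:

```python
import heapq

def greedy_best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: always expand the frontier node with
    the smallest heuristic value f(n) = h(n).  `neighbors` maps a state
    to its successor states; `h` estimates the cost to the goal.
    States must be comparable so the heap can break ties."""
    frontier = [(h(start), start)]
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:               # reconstruct the path found
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:   # visited check prevents loops
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None                        # goal unreachable
```

Because only $h(n)$ orders the frontier, the visited check is what prevents the loops mentioned above; without it the search could oscillate between states.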
A* search
- Method: use the evaluation function $f(n) = g(n) + h(n)$ to estimate which candidate state is closest to the goal.
- $g(n)$: the cost of the path from the start node to node $n$.
- $h(n)$: the estimated cost from node $n$ to the goal.
- Idea: take the actual cost incurred so far into account, not just the estimated remaining cost.
- Quality of the result
- Completeness: Yes, if there are only finitely many nodes with $ f(n) \leq f^*(n) $ and $f(n)$ is monotonically increasing.
- Optimality: Yes, if there are only finitely many nodes with $ f(n) \leq f^*(n) $, $f(n)$ is monotonically increasing, and $ h(n) \leq h^*(n) $.
- Why finitely many nodes with $ f(n) \leq f^*(n) $? Otherwise we might have to visit infinitely many nodes before reaching the goal.
- Why monotonically increasing $f(n)$? So that the growing f contours eventually cover the optimal path.
- Time: $O(b^m)$, exponential, bad.
- Space: $O(b^m)$; all generated nodes are kept, bad.
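The A* procedure can be sketched in Python as follows; the graph representation (`neighbors(n)` yielding `(successor, step_cost)` pairs) and the function names are assumptions made for illustration:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: order the frontier by f(n) = g(n) + h(n), where g is
    the accumulated path cost and h the estimated remaining cost.
    With an admissible h, the first goal popped is optimal."""
    frontier = [(h(start), 0, start)]       # entries are (f, g, node)
    best_g = {start: 0}
    came_from = {start: None}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:                    # reconstruct the path found
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1], g
        if g > best_g[node]:
            continue                        # stale queue entry, skip
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng            # found a cheaper path to nxt
                came_from[nxt] = node
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None, float("inf")               # goal unreachable
```

The `best_g` table records the cheapest accumulated cost found so far per node, which is also why the space cost noted above is exponential: all generated nodes are retained.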
Optimality of A* search
- Statement: if the heuristic function is admissible, the first goal node reached by A* search is guaranteed to be the minimum-cost goal node.
- Admissible: the estimated cost never exceeds the actual cost, $ h \left( n \right) \leq h^*\left( n \right)$; that is, the heuristic must not overestimate the cost.
Proof (1): suppose a suboptimal goal $G'$ appears on the frontier while some node $n$ on an optimal path to the optimal goal $G$ is still unexpanded. Then:
$f(G') $, // evaluation cost of G'.
$= g(G') + h(G') $, // by definition of the evaluation function.
$= g(G') $, // the heuristic cost from G' to itself is zero.
$> g(G) $, // G' is suboptimal, so its accumulated cost exceeds g(G).
$= g(n) + h^*(n) $, // n is on an optimal path, so g(G) = g(n) + h*(n).
$\geq g(n) + h(n) $, // h is admissible.
$= f(n)$
Since $G'$ is suboptimal, $g(G') > g(G)$ and hence $f(G') > f(G)$ (remember that the $g$ function is the accumulated cost). The strategy of the proof: any node $n$ on an optimal path satisfies $f(G') > f(n)$, so A* can never expand $G'$ first, because $G'$ has the larger evaluation value.
Proof (2): for graph search, A* additionally needs a consistent heuristic to guarantee optimality. The lemma: if $h(n)$ is consistent, the nodes eligible for expansion form f contours with non-decreasing $f$, like contour lines on a map.
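The lemma can be sketched as a short derivation (notation as above; $c(n, n')$ denotes the step cost from $n$ to a successor $n'$):

```latex
% Consistency (triangle inequality): h(n) \le c(n, n') + h(n')
\begin{aligned}
f(n') &= g(n') + h(n') \\
      &= g(n) + c(n, n') + h(n') \\
      &\ge g(n) + h(n) = f(n)
\end{aligned}
```

So $f$ never decreases along any path, which yields the nested f contours.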
Iterative-deepening A* search
- Method: run A* with an iteratively increasing cost limit; also a preferred method (borrowing the idea of iterative-deepening DFS).
- Instead of depth, the limit is on the cost.
- Similar properties to A*, but often cheaper.
- Idea: trade time for space. Keep a record of the cheapest branch other than the current one; when the cost of the current search exceeds that recorded cost, discard the corresponding search subtree and continue the search along the recorded branch.
- Advantage: uses only linear storage space.
- Disadvantage: regenerates a large number of nodes.
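A minimal Python sketch of IDA*, under the same illustrative graph representation as before (`neighbors(n)` yielding `(successor, step_cost)` pairs); the names are assumptions, not from the original notes:

```python
FOUND = object()   # sentinel distinguishing success from an f overflow

def ida_star(start, goal, neighbors, h):
    """Iterative-deepening A*: repeated depth-first searches bounded by
    f = g + h.  Each pass raises the bound to the smallest f value that
    exceeded the previous bound, so only the current path is stored
    (linear space), at the price of regenerating nodes."""
    path = [start]

    def dfs(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                      # report the overflowing f value
        if node == goal:
            return FOUND
        minimum = float("inf")
        for nxt, cost in neighbors(node):
            if nxt in path:               # avoid cycles on the current path
                continue
            path.append(nxt)
            t = dfs(g + cost, bound)
            if t is FOUND:
                return FOUND              # keep the solution path intact
            path.pop()
            minimum = min(minimum, t)     # cheapest branch that overflowed
        return minimum

    bound = h(start)
    while True:
        t = dfs(0, bound)
        if t is FOUND:
            return list(path)
        if t == float("inf"):
            return None                   # no solution exists
        bound = t                         # smallest f that exceeded the bound
```

Only the current path is kept in memory, matching the linear-space advantage noted above, while each raised bound restarts the depth-first pass from scratch, matching the node-regeneration disadvantage.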
Summary - Special Cases of Best-first Search
Many of the standard AI search methods can be viewed as special cases of best-first search (A*):
- Depth-first search occurs when g = 0 and h = (previous path length)-1. NODES is treated as a stack in this case.
- Breadth-first search occurs when g = previous path length and h = 0. NODES is treated as a queue in this case.
- Greedy search occurs when g = 0 and h estimates the cost to the goal. NODES is treated as a priority queue in this case.
- Uniform-cost search occurs when g = the cost of the path thus far and h = 0. NODES is treated as a priority queue in this case.
- A* search occurs when g = the cost of the path thus far, h = an admissible estimate of the cost to a goal state, and NODES is treated as a priority queue.
By varying g and h, we can create different search algorithms.
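This unification can be made concrete with one Python skeleton parameterized by the priority function; the function names and the `priority(g, n)` signature are illustrative assumptions:

```python
import heapq

def best_first(start, goal, neighbors, priority):
    """Generic best-first search over a priority queue.  `priority(g, n)`
    computes f from the accumulated cost g and the node n; choosing it
    differently recovers the special cases listed above."""
    frontier = [(priority(0, start), 0, start)]   # (f, g, node)
    best_g = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                              # cost of the path found
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (priority(ng, nxt), ng, nxt))
    return float("inf")                           # goal unreachable

# priority = lambda g, n: g            -> uniform-cost search
# priority = lambda g, n: g + h(n)     -> A*
# priority = lambda g, n: h(n)         -> greedy best-first search
```

On a graph where the greedy ordering is misleading, the same skeleton returns the optimal cost for uniform-cost and A* but a worse cost for greedy, illustrating the optimality differences discussed earlier.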