Dynamic programming is a powerful technique for solving problems that might otherwise appear to be extremely difficult to solve in polynomial time. Algorithms built on the dynamic programming paradigm are used in many areas of CS, including many examples in AI (from solving planning problems to voice recognition).
Dynamic programming works by solving subproblems and using the results of those subproblems to more quickly calculate the solution to a larger problem. Unlike the divide-and-conquer paradigm (which also uses the idea of solving subproblems), dynamic programming typically involves solving all possible subproblems rather than a small portion.
Often, dynamic programming algorithms are visualized as "filling an array" where each element of the array is the result of a subproblem that can later be reused rather than recalculated. One way of calculating Fibonacci numbers is to use the fact that
fibonacci(n) = fibonacci(n-1) + fibonacci(n-2)
And then write a recursive function such as
int fibonacci(int n)
{
    if(n == 0)
    {
        return 0;
    }
    else if(n == 1)
    {
        return 1;
    }
    else
    {
        return fibonacci(n - 1) + fibonacci(n - 2);
    }
}
This algorithm is excruciatingly slow (it takes exponential time to run) because each call to fibonacci(n - 1) independently recomputes every value from fibonacci(n - 2) down to fibonacci(0), work that the sibling call to fibonacci(n - 2) then repeats. The heart of dynamic programming is to avoid this kind of recalculation by saving the results. In the case of Fibonacci numbers, other, even simpler approaches exist, but the example serves to illustrate the basic idea.
One use of dynamic programming is computing "all pairs shortest paths" in a weighted graph. Although Dijkstra's algorithm solves the problem for a single source vertex, it would be necessary to run it once from every vertex, and it is a somewhat complicated algorithm. Dynamic programming produces a simpler, cleaner algorithm (known as the Floyd-Warshall algorithm), though not an inherently faster one.
The trick here is to find a useful subproblem whose solution we can use to solve a larger problem. In this case, it turns out that a useful subproblem is the shortest path between a pair of vertices that uses only the first k vertices as intermediate stops. For simplicity, we'll just number each vertex 1 through n, where n is the total number of vertices. Each subproblem is then: find the distance from vertex i to vertex j using only vertices numbered k or less as intermediates.
We'll store these distances in a two-dimensional array, D_k, indexed by the start and end vertices; D_k[i, j] holds the length of the shortest path from i to j that uses only vertices numbered k or less as intermediate nodes. (The endpoints i and j themselves may be numbered greater than k.)
D_k[i, j] = length of the shortest path from i to j whose intermediate vertices are all numbered k or less
Now, suppose we've already computed D_{k-1}; that is, we know the shortest path between every pair of vertices i and j using only intermediate vertices numbered less than k. The shortest path from i to j that may also use vertex k either skips vertex k entirely, in which case it is the same as before, or it passes through k, in which case it must be the combination of the shortest path from i to k and the shortest path from k to j, with both of those pieces, of course, using only intermediate vertices numbered less than k (i.e., using D_{k-1}[i, k] and D_{k-1}[k, j]).
We can express the definition of D_k in the following form (the braces simply group the subscript: D_{k-1}, for instance, is the distance array for k-1).
D_k[i, j] = min(D_{k-1}[i, j], D_{k-1}[i, k] + D_{k-1}[k, j])
Once the definition is expressed in this recursive form, it is easy to see how to code it up: fill a three-dimensional array with a row and column for each vertex and a depth equal to the number of vertices. But we can actually do a bit better than this! Notice that we never need to keep around more than the array currently being calculated and the previous one. (In fact, a single array suffices: during round k, the entries D[i, k] and D[k, j] are never changed, because a shortest path that starts or ends at vertex k gains nothing by also passing through k, so we can update in place and get away with using O(|V|^2) space.)
The key point to take away is that using dynamic programming, we can reduce the problem of finding all of the shortest paths to solving a series of subproblems whose results can be reused again and again to solve larger problems. Whenever you try to solve a problem using dynamic programming, think about how you can break the problem up into subparts. Perhaps you really only need to compute a small number of subparts (in which case you might be able to use a divide-and-conquer approach, a la merge sort), but if you find that you need to know every possible subproblem, then dynamic programming may be the right solution.
Another point to keep in mind is that it often helps to solve a simpler version of the same problem before trying to tackle the full task at hand. For instance, you might have noticed that in the all-pairs shortest paths problem, we actually solved a slightly simpler problem: what is the distance of the shortest path from i to j? This problem is a bit easier to conceptualize and write a recurrence for. But now that we have the idea, it's easy to turn it into a solution to the actual problem in question: we just add a second array that stores the intermediate vertex, k, through which the shortest path passes. In this way, printing a path is reduced to finding the path from i to k and from k to j, which we can solve the same way. (The base case here is when the route between two vertices is a direct edge that goes through no additional vertices.)
Moreover, the code for dynamic programming can be surprisingly simple. The pseudocode for the above example can be written in only a few lines, and it turns out that we only need to fill a single array (as long as we fill it in a reasonable order).
/* Note that this is C-like pseudocode */
/* n is the number of vertices */
int D[n][n];
/* initialize D so that each edge, (i, j), has the correct weight at D[i][j],
 * each D[i][i] is 0, and all other elements are set to INFINITY */
for(int k = 0; k < n; k++)
{
    for(int i = 0; i < n; i++)
    {
        for(int j = 0; j < n; j++)
        {
            if(D[i][j] > D[i][k] + D[k][j])
            {
                D[i][j] = D[i][k] + D[k][j];
            }
        }
    }
}