Dynamic Programming is a general algorithm design technique for solving problems that can be formulated as recurrences with overlapping subinstances.
Idea:
- set up a recurrence relating a solution of a larger instance to solutions of smaller instances.
- solve smaller instances once.
- record solutions in a table.
- extract solution to the initial instance from that table.
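The four steps above can be sketched with the Fibonacci numbers as a minimal example (this example is illustrative, not from the original text): the recurrence is F(i) = F(i-1) + F(i-2), each smaller instance is solved once, results are recorded in a table, and the answer is read from the last table entry.

```python
def fib(n):
    """Bottom-up Fibonacci: solve each smaller instance once,
    record it in a table, and extract the final answer."""
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[i] will hold F(i)
    table[1] = 1
    for i in range(2, n + 1):
        # recurrence: F(i) = F(i-1) + F(i-2)
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # 55
```

Filling the table from the smallest instances upward is often called the bottom-up (tabulation) form of dynamic programming.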
History:
Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems.
It is somewhat similar to the divide and conquer concept.
There are three main parts/concepts in dynamic programming:
- Divide and Conquer
- Memoization
- Recurrence
In the divide and conquer technique, the problem is divided into several smaller problems (known as subproblems); after these are solved, the solutions to all the subproblems are combined in order to obtain the solution to the original problem.
In dynamic programming, however, the solution to each subproblem is stored and reused whenever another subproblem needs it; this is called "memoization". Reusing stored solutions is what reduces the time complexity of dynamic programming.
The recurrence relation expresses the solution of each subproblem in terms of smaller subproblems, and it is applied repeatedly until the base cases are reached and the final solution emerges.
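The contrast can be sketched in Python (a minimal illustrative example, not from the original text): a naive recursive Fibonacci recomputes the same subproblems exponentially many times, while a memoized version stores each result in a dictionary so the recurrence is evaluated only once per subproblem.

```python
def fib_memo(n, memo=None):
    """Top-down Fibonacci using the recurrence F(n) = F(n-1) + F(n-2),
    with each result cached ("memoized") so it is computed only once."""
    if memo is None:
        memo = {}
    if n in memo:          # reuse a previously stored subproblem solution
        return memo[n]
    if n < 2:
        result = n
    else:
        result = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    memo[n] = result       # record the solution for later reuse
    return result

print(fib_memo(10))  # 55
```

With memoization, the running time drops from exponential to linear in n, since each subproblem is solved exactly once.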
Advantages:
- Saves CPU time by reusing the solutions of subproblems.
- Lower time complexity.
- Fast and uses fewer resources.
Disadvantages:
- The memory space used to store the solutions of subproblems can be large in some cases.
- It applies only to problems that can be divided into subproblems.
Problems commonly solved with dynamic programming include the knapsack problem, the Fibonacci series, the N-queens problem, and many others.
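As one further illustration, the 0/1 knapsack problem mentioned above can be sketched with a small dynamic programming table (an illustrative example under assumed item weights and values, not from the original text):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: dp[w] holds the best total value achievable
    with knapsack capacity w, considering the items seen so far."""
    dp = [0] * (capacity + 1)
    for weight, value in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

# hypothetical data: three items with weights 2, 3, 4 and values 3, 4, 5
print(knapsack([2, 3, 4], [3, 4, 5], 5))  # 7 (take the first two items)
```

Each table entry is derived from smaller capacities via a recurrence, so every subproblem is solved only once.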