The name dynamic programming alone may have put you off. Unlike recursion, backtracking, or greedy algorithms, whose names hint at what they do, dynamic programming tells you almost nothing about the idea behind it, which makes it easy to misunderstand. Yet it is an essential chapter in Introduction to Algorithms, and an optimization technique with important applications in machine learning. In this article, AlgoMonster shows you how to get into the dynamic programming way of thinking.

What are dynamic programming and operations research?

Before explaining the main idea, let’s first talk about the subject of operations research. It is a discipline of applied mathematics that uses statistics and mathematical models to find optimal or near-optimal solutions to complex problems. Operations research can tackle difficult real-life situations and significantly improve or optimize the efficiency of existing systems.

Dynamic programming is an essential branch of operations research. It is an optimization method that transforms a multi-stage decision process into a series of single-stage problems solved one by one. The idea grew out of the American mathematician Richard Bellman’s research in the 1950s on multi-stage decision problems, and its central idea is known as Bellman’s Principle of Optimality. As Bellman himself admitted, we should not read too much into the name: he and his colleagues chose it partly to obscure what they were actually doing when applying for government funding.

How to think about dynamic programming

First, do not take the name dynamic programming at face value, because the name has little to do with the idea. It helps to think of it under a name that actually reflects the idea, such as the “recurrence formula method” or the “state transition equation method.”

Second, rather than a single algorithm, dynamic programming is better thought of as a methodology for solving problems.

Third, its general form is to find an optimal value, as in the longest common subsequence, the maximum subarray sum, the optimal binary search tree, and so on.

The basic idea of dynamic programming

The basic idea is to decompose the problem into several subproblems and then obtain the solution of the original problem from the solutions of those subproblems. However, unlike divide and conquer, the subproblems produced by decomposing a problem suited to dynamic programming are often not independent of each other. If we solved such a problem with plain divide and conquer, the number of subproblems would grow so large that solving the original problem would take exponential time.

Yet the number of distinct subproblems is often only polynomial.

When such a problem is solved by divide and conquer, some subproblems end up being solved many times over. If we save the answer to each subproblem the first time we solve it and simply look it up when it is needed again, we avoid a great deal of repeated computation and obtain an algorithm with polynomial time complexity. In practice, a table records the answers to all solved subproblems: every result is filled into the table, whether or not that subproblem turns out to be needed later. That is the basic idea. To summarize:

First, decompose the problem into several subproblems, solve the subproblems, and then obtain the solution of the original problem from their solutions.

Second, the subproblems obtained by decomposition are often not independent of each other.

Finally, save the answers to the solved subproblems to avoid repeated computation.

These points are worth memorizing; the examples later in this article will corroborate them, and the sketch below shows the pattern in its simplest form.
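
To make the pattern concrete, here is a minimal sketch in Python (the function name and code are illustrative, not from the original article) that fills a table bottom-up for the Fibonacci numbers, a problem the final summary returns to: each entry is built from two smaller, already-solved subproblems, and every answer is recorded in the table.

```python
def fib_table(n):
    # A minimal bottom-up sketch: solve every subproblem once and record it in a table.
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[i] will hold the i-th Fibonacci number
    table[1] = 1
    for i in range(2, n + 1):
        # The answer to subproblem i comes from two already-solved subproblems.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_table(10))  # 55
```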

The essential elements we should know

A dynamic programming algorithm decomposes the problem into several subproblems, solves those subproblems first, saves their answers to avoid repeated computation, and then obtains the solution of the original problem from the solutions of the subproblems. Two essential elements determine whether a problem can be solved this way: the “optimal substructure property” and the “overlapping subproblem property.”

Edit distance

You may already be familiar with edit distance. In Natural Language Processing (NLP), it is a fundamental way of measuring text similarity. In simple terms, it is the minimum number of single-character operations (replacements, deletions, or insertions) needed to turn one string into another, and computing it is a typical dynamic programming process.

If computing the edit distance between the two full strings is the parent problem, then each subproblem is the edit distance between shorter prefixes of those strings. As described above, dynamic programming splits the parent problem into these subproblems and writes down their solutions so that none of them has to be computed twice.
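
Here is one way that idea might look in Python, as a bottom-up sketch (the code is illustrative, not from the original article): dist[i][j] holds the edit distance between the first i characters of a and the first j characters of b, so every entry is built from three already-computed neighboring subproblems.

```python
def edit_distance(a, b):
    # Bottom-up sketch: dist[i][j] is the edit distance between a[:i] and b[:j].
    m, n = len(a), len(b)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i  # delete all i characters of a[:i]
    for j in range(n + 1):
        dist[0][j] = j  # insert all j characters of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dist[i][j] = dist[i - 1][j - 1]  # characters match, no new operation
            else:
                dist[i][j] = 1 + min(
                    dist[i - 1][j],      # delete from a
                    dist[i][j - 1],      # insert into a
                    dist[i - 1][j - 1],  # replace
                )
    return dist[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

Each cell is computed once and then reused by its neighbors, which is exactly the “write down the solutions of the subproblems” step described above.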

Optimal substructure property

The first step in designing a dynamic programming algorithm is usually to characterize the structure of an optimal solution. A problem has the optimal substructure property when its optimal solution contains optimal solutions of its subproblems. In the edit distance example, an optimal sequence of edits for the full strings necessarily contains optimal edit sequences for their prefixes. The optimal substructure of a problem is therefore an essential clue that dynamic programming can solve it.

Overlapping Subproblem Property

When a problem is solved top-down with a recursive algorithm, the subproblems generated along the way are not always new: some of them are computed over and over. Dynamic programming exploits this overlapping subproblem property by solving each subproblem only once and saving its solution in a table; when the same subproblem comes up again, the stored result can be looked up in constant time.
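
As an illustration (again a sketch, not code from the original article), here is the same edit distance written top-down: the memo dictionary plays the role of the table, and each (i, j) subproblem is computed at most once no matter how many times the recursion reaches it.

```python
def edit_distance_memo(a, b):
    # Top-down sketch: each subproblem (i, j) is solved once and cached.
    memo = {}

    def solve(i, j):
        # Edit distance between a[:i] and b[:j].
        if (i, j) in memo:
            return memo[(i, j)]  # constant-time lookup of a saved answer
        if i == 0 or j == 0:
            result = i + j       # only insertions or deletions remain
        elif a[i - 1] == b[j - 1]:
            result = solve(i - 1, j - 1)
        else:
            result = 1 + min(solve(i - 1, j),      # delete
                             solve(i, j - 1),      # insert
                             solve(i - 1, j - 1))  # replace
        memo[(i, j)] = result
        return result

    return solve(len(a), len(b))

print(edit_distance_memo("kitten", "sitting"))  # 3
```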

Overlapping subproblems may seem obvious, but they are worth dwelling on, because they are what connect all of the ideas above.

Final summary

The subproblems produced by dynamic programming’s decomposition usually depend on each other. If the decomposed subproblems are independent, as in binary search, there are no overlapping subproblems, so the problem is better suited to divide and conquer than to dynamic programming. The Fibonacci sequence, on the other hand, is a better fit. Strictly speaking, computing Fibonacci numbers is still not the typical use of dynamic programming (remember, the general form is to find an optimal value!), but it is an excellent way to understand the overlapping subproblem property, as the comparison below shows.
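
To make the overlap concrete, here is a small sketch (illustrative, not from the original article) that counts the recursive calls of the naive Fibonacci: the same small subproblems are recomputed an exponential number of times, which is exactly what the table in the earlier sketch avoids.

```python
calls = 0

def fib_naive(n):
    # Naive recursion: the same subproblems are recomputed again and again.
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

print(fib_naive(30), calls)  # 832040 2692537 -- millions of calls just for n = 30
```

With the table-based version shown earlier, each subproblem is solved exactly once, so the work grows linearly in n rather than exponentially.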
