[algorithm] Difference between Divide and Conquer Algo and Dynamic Programming

What is the difference between Divide and Conquer Algorithms and Dynamic Programming Algorithms? How are the two terms different? I do not understand the difference between them.

Please use a simple example to explain the difference between the two, and explain in what respects they are similar.

This question is related to algorithm dynamic-programming divide-and-conquer

The answer is


Divide and Conquer

Divide and Conquer works by dividing the problem into sub-problems, conquering each sub-problem recursively, and combining these solutions to solve the original problem.

Dynamic Programming

Dynamic Programming is a technique for solving problems with overlapping subproblems. Each sub-problem is solved only once and the result of each sub-problem is stored in a table (generally implemented as an array or a hash table) for future reference. These sub-solutions may be used to obtain the original solution, and the technique of storing the sub-problem solutions is known as memoization.

You may think of DP = recursion + re-use

A classic example for understanding the difference is to look at both of these approaches to computing the n-th Fibonacci number. Check this material from MIT.


[Image: Divide and Conquer approach]

[Image: Dynamic Programming approach]
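
As a rough Python sketch of the two approaches pictured above (the function names are illustrative assumptions, not taken from the MIT material):

# Divide and Conquer style: plain recursion, the same sub-problems
# (e.g. fib(2)) are recomputed many times, giving exponential time.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic Programming style: each sub-problem is solved once and its
# result is stored in a table for re-use, giving linear time.
def fib_memo(n, table=None):
    if table is None:
        table = {}
    if n < 2:
        return n
    if n not in table:
        table[n] = fib_memo(n - 1, table) + fib_memo(n - 2, table)
    return table[n]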


Other differences between divide and conquer and dynamic programming are:

Divide and conquer:

  1. Does more work on the sub-problems (the same sub-problem may be solved repeatedly) and hence consumes more time.
  2. In divide and conquer the sub-problems are independent of each other.

Dynamic programming:

  1. Solves each sub-problem only once and then stores the result in a table.
  2. In dynamic programming the sub-problems are not independent; they overlap.

Divide and Conquer

  • In this approach the problem is solved in the following three steps: 1. Divide - dividing the problem into a number of sub-problems 2. Conquer - conquering by solving the sub-problems recursively 3. Combine - combining the sub-problem solutions to get the original problem's solution
  • Recursive approach
  • Top Down technique
  • Example: Merge Sort
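
A minimal Merge Sort sketch in Python (illustrative only) showing the divide, conquer and combine steps:

def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the list into two halves.
    mid = len(arr) // 2
    # Conquer: sort each half recursively.
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged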

Dynamic Programming

  • In this approach the problem is solved in the following steps: 1. Defining the structure of an optimal solution 2. Recursively defining the values of optimal solutions 3. Computing the values of optimal solutions in a bottom-up fashion 4. Constructing the final optimal solution from the computed values
  • Non-Recursive
  • Bottom Up Technique
  • Example: Matrix Chain Multiplication
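
As a rough illustration of those steps, here is a bottom-up Python sketch of the classic matrix chain ordering problem (the function name and the dims convention are illustrative assumptions):

def matrix_chain_order(dims):
    # Matrix i has size dims[i-1] x dims[i], for i = 1..n.
    n = len(dims) - 1
    # cost[i][j] = minimum number of scalar multiplications needed
    # to compute the product of matrices i..j (the optimal solution value).
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    # Bottom-up: solve chains of increasing length using the recurrence.
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

# Example: matrices of sizes 10x30, 30x5, 5x60 -> 4500 multiplications.
print(matrix_chain_order([10, 30, 5, 60]))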

Sometimes when programming recursively, you call the function with the same parameters multiple times, which is unnecessary.

The famous example is the Fibonacci numbers:

           index: 1,2,3,4,5,6...
Fibonacci number: 1,1,2,3,5,8...

function F(n) {
    if (n < 3)
        return 1
    else
        return F(n-1) + F(n-2)
}

Let's run F(5):

F(5) = F(4) + F(3)
     = {F(3)+F(2)} + {F(2)+F(1)}
     = {[F(2)+F(1)]+1} + {1+1}
     = 1+1+1+1+1

So we have called: F(4) 1 time, F(3) 2 times, F(2) 3 times, F(1) 2 times.

Dynamic Programming approach: if you call a function with the same parameter more than once, save the result in a variable so you can access it directly the next time. The iterative way:

if (n==1 || n==2)
    return 1
else
    f1=1, f2=1
    for i=3 to n
         f = f1 + f2
         f1 = f2
         f2 = f
    return f

Let's call F(5) again:

fibo1 = 1
fibo2 = 1 
fibo3 = (fibo1 + fibo2) = 1 + 1 = 2
fibo4 = (fibo2 + fibo3) = 1 + 2 = 3
fibo5 = (fibo3 + fibo4) = 2 + 3 = 5

As you can see, whenever you need a value multiple times you just access the corresponding variable to get it, instead of recalculating it.

By the way, dynamic programming doesn't mean converting recursive code into iterative code. You can also save the sub-results in a variable and keep the code recursive. In that case the technique is called memoization. For our example it looks like this:

// declare and initialize a dictionary
var dict = new Dictionary<int,int>();
for i=1 to n
    dict[i] = -1

function F(n) {
    if (n < 3)
        return 1
    else
    {
        if (dict[n] == -1)
            dict[n] = F(n-1) + F(n-2)

        return dict[n]                
    }
}

So the relationship to Divide and Conquer is that D&C algorithms rely on recursion, and some of them have this "multiple function calls with the same parameters" issue. Search for "matrix chain multiplication" and "longest common subsequence" for examples where DP is needed to improve the T(n) of a D&C algorithm.
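
For instance, a memoized sketch of the longest common subsequence length in Python (using functools.lru_cache; the names are illustrative assumptions), where the cache removes the repeated calls that the plain recursive version would make:

from functools import lru_cache

def lcs_length(a, b):
    # Plain recursion would re-solve the same (i, j) sub-problem many
    # times; the cache stores each result so it is computed only once.
    @lru_cache(maxsize=None)
    def solve(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + solve(i + 1, j + 1)
        return max(solve(i + 1, j), solve(i, j + 1))

    return solve(0, 0)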


I assume you have already read Wikipedia and other academic resources on this, so I won't recycle any of that information. I must also caveat that I am not a computer science expert by any means, but I'll share my two cents on my understanding of these topics...

Dynamic Programming

Breaks the problem down into discrete subproblems. The recursive algorithm for the Fibonacci sequence is an example of Dynamic Programming (once the repeated sub-results are stored and reused), because it solves for fib(n) by first solving for fib(n-1). In order to solve the original problem, it solves a different problem.

Divide and Conquer

These algorithms typically solve similar pieces of the problem, and then put them together at the end. Mergesort is a classic example of divide and conquer. The main difference between this example and the Fibonacci example is that in a mergesort, the division can (theoretically) be arbitrary, and no matter how you slice it up, you are still merging and sorting. The same amount of work has to be done to mergesort the array, no matter how you divide it up. Solving for fib(52) requires more steps than solving for fib(2).


I think of Divide & Conquer as a recursive approach and Dynamic Programming as table filling.

For example, Merge Sort is a Divide & Conquer algorithm, as in each step, you split the array into two halves, recursively call Merge Sort upon the two halves and then merge them.

Knapsack is a Dynamic Programming algorithm as you are filling a table representing optimal solutions to subproblems of the overall knapsack. Each entry in the table corresponds to the maximum value you can carry in a bag of weight w given items 1-j.
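
A rough Python sketch of that table filling (0/1 knapsack; the dp name and indexing are illustrative assumptions), where dp[j][w] is the best value achievable with the first j items and capacity w:

def knapsack(values, weights, capacity):
    n = len(values)
    # dp[j][w] = maximum value achievable using items 1..j in a bag
    # of capacity w.
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for w in range(capacity + 1):
            # Option 1: skip item j.
            dp[j][w] = dp[j - 1][w]
            # Option 2: take item j, if it fits in the remaining capacity.
            if weights[j - 1] <= w:
                dp[j][w] = max(dp[j][w],
                               dp[j - 1][w - weights[j - 1]] + values[j - 1])
    return dp[n][capacity]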


Divide and Conquer involves three steps at each level of recursion:

  1. Divide the problem into subproblems.
  2. Conquer the subproblems by solving them recursively.
  3. Combine the solution for subproblems into the solution for original problem.
    • It is a top-down approach.
    • It does more work on sub-problems and hence consumes more time.
    • e.g. the n-th term of the Fibonacci series computed this way takes O(2^n) time.

Dynamic Programming involves the following four steps:

  1. Characterise the structure of optimal solutions.
  2. Recursively define the values of optimal solutions.
  3. Compute the values of optimal solutions.
  4. Construct an optimal solution from the computed information.

  • It is a Bottom-up approach.
  • It consumes less time than divide and conquer since we reuse the values computed earlier rather than computing them again.
  • e.g. the n-th term of the Fibonacci series can be computed in O(n) time.

For easier understanding, let's see divide and conquer as a brute force solution and its optimisation as dynamic programming.

N.B. only divide and conquer algorithms with overlapping subproblems can be optimised with DP.


  • Divide and Conquer
    • The problem is broken into non-overlapping sub-problems
    • Example: factorial numbers, i.e. fact(n) = n * fact(n-1)
fact(5) = 5 * fact(4) = 5 * (4 * fact(3)) = 5 * 4 * (3 * fact(2)) = 5 * 4 * 3 * 2 * fact(1)

As we can see above, no fact(x) is repeated, so factorial has non-overlapping sub-problems.
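
A tiny Python sketch of that recursion (illustrative): every call has a distinct argument, so there is nothing to re-use:

def fact(n):
    # Each sub-problem fact(n-1) occurs exactly once along the call
    # chain, so memoization would not save any work here.
    return 1 if n <= 1 else n * fact(n - 1)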

  • Dynamic Programming
    • The problem is broken into overlapping sub-problems
    • Example: Fibonacci numbers, i.e. fib(n) = fib(n-1) + fib(n-2)
fib(5) = fib(4) + fib(3) = (fib(3) + fib(2)) + (fib(2) + fib(1))

As we can see above, fib(4) and fib(3) both use fib(2). Similarly, many fib(x) calls get repeated; that's why Fibonacci has overlapping sub-problems.

  • As a result of this repetition of sub-problems in DP, we can keep the results in a table and save computation effort. This is called memoization.

Dynamic Programming and Divide-and-Conquer Similarities

As I see it, dynamic programming is an extension of the divide and conquer paradigm.

I would not treat them as something completely different. They both work by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.

So why do we still have different paradigm names, and why did I call dynamic programming an extension? It is because the dynamic programming approach may be applied to a problem only if the problem has certain restrictions or prerequisites, and because dynamic programming then extends the divide and conquer approach with memoization or tabulation techniques.

Let’s go step by step…

Dynamic Programming Prerequisites/Restrictions

As we've just discovered, there are two key attributes that a divide and conquer problem must have in order for dynamic programming to be applicable:

  • Optimal substructure — an optimal solution can be constructed from optimal solutions of its subproblems

  • Overlapping sub-problems — the problem can be broken down into subproblems which are reused several times, or a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems

Once these two conditions are met, we can say that the divide and conquer problem may be solved using a dynamic programming approach.

Dynamic Programming Extension for Divide and Conquer

The dynamic programming approach extends the divide and conquer approach with two techniques (memoization and tabulation) that both have the purpose of storing and re-using sub-problem solutions, which may drastically improve performance. For example, the naive recursive implementation of the Fibonacci function has a time complexity of O(2^n), while the DP solution does the same in only O(n) time.

Memoization (top-down cache filling) refers to the technique of caching and reusing previously computed results. The memoized fib function would thus look like this:

memFib(n) {
    if (mem[n] is undefined) {
        if (n < 2) result = n
        else result = memFib(n-2) + memFib(n-1)
        mem[n] = result
    }
    return mem[n]
}

Tabulation (bottom-up cache filling) is similar but focuses on filling the entries of the cache. Computing the values in the cache is easiest done iteratively. The tabulation version of fib would look like this:

tabFib(n) {
    mem[0] = 0
    mem[1] = 1
    for i = 2...n
        mem[i] = mem[i-2] + mem[i-1]
    return mem[n]
}

You may read more about memoization and tabulation comparison here.

The main idea you should grasp here is that because our divide and conquer problem has overlapping sub-problems, caching the sub-problem solutions becomes possible, and thus memoization and tabulation step onto the scene.

So What Is the Difference Between DP and DC After All?

Since we're now familiar with the DP prerequisites and its methodologies, we're ready to put everything mentioned above into one picture.

[Image: Dynamic Programming vs Divide-and-Conquer]

If you want to see code examples, you may take a look at a more detailed explanation here, where you'll find two algorithm examples, Binary Search and Minimum Edit Distance (Levenshtein Distance), that illustrate the difference between DP and DC.
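
As a small illustration of the divide and conquer side (a sketch in Python, not taken from that explanation), here is a minimal binary search; its sub-problems never overlap, so there is nothing worth memoizing:

def binary_search(sorted_list, target, lo=0, hi=None):
    # Divide: look at the middle element and recurse into one half only;
    # the halves never overlap, so no sub-result is ever reused.
    if hi is None:
        hi = len(sorted_list) - 1
    if lo > hi:
        return -1  # not found
    mid = (lo + hi) // 2
    if sorted_list[mid] == target:
        return mid
    if sorted_list[mid] < target:
        return binary_search(sorted_list, target, mid + 1, hi)
    return binary_search(sorted_list, target, lo, mid - 1)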