There are various forms of algorithms that can be applied to solve challenging problems. This article discusses a family of algorithms that can solve any solvable problem, though some more efficiently than others. There also exist algorithms that use heuristics (artificial intelligence) to give the program clues that help it solve the problem much more quickly; those algorithms are beyond the scope of this article.
The Brute Force algorithm is an exhaustive search of the problem space for any solution. In the worst case, the entire problem space must be searched. This makes it the most primitive form of search, the one on which other algorithms (Branch and Backtrack, among others) are based.
In a nutshell, given a particular problem, every possible solution (a permutation of the elements that make up a solution) is tested to see whether it solves the problem.
Fortunately, this approach can also detect whether a problem is solvable at all: if no solution is discovered after an exhaustive search, the problem is proven unsolvable within the given search space.
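As a minimal sketch of this idea (the subset-sum problem stands in for a generic search space, and the function name is my own), every candidate subset is tested in turn:

```python
# Brute force: try every subset of `numbers` until one sums to `target`,
# or exhaust the problem space and prove that no solution exists.
from itertools import combinations

def brute_force_subset_sum(numbers, target):
    """Exhaustively test every possible subset of the input."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset          # a solution was found
    return None                        # exhaustive search: no solution exists

print(brute_force_subset_sum([3, 9, 8, 4], 12))   # (3, 9)
print(brute_force_subset_sum([3, 9, 8], 6))       # None: unsolvable
```

Note how the `None` return carries real information: the entire space was searched, so the problem is provably unsolvable for that input.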
This algorithm recursively partitions the problem into smaller and smaller problem spaces. At the base case, it determines whether the single-element problem space solves the sub-problem, then returns that sub-solution and joins it with the other partitioned sub-solutions. The Dynamic Programming algorithm is based on this approach, although it differs slightly (as explained below).
For example, suppose we are given two locations on a map, a start and a destination, and want to determine the best route between them. Applying this algorithm, we can partition the map into smaller and smaller regions, solve each sub-problem, and join the routes.
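The route example needs geometry, so here is the same split/solve/join structure sketched on a simpler problem, finding the maximum of a list (function name is my own):

```python
def dnc_max(values):
    """Divide and Conquer: split the space, solve halves, join sub-solutions."""
    if len(values) == 1:
        return values[0]              # base case: singleton problem space
    mid = len(values) // 2
    left = dnc_max(values[:mid])      # solve each partition recursively
    right = dnc_max(values[mid:])
    return max(left, right)           # join the two sub-solutions

print(dnc_max([7, 2, 9, 4, 1]))       # 9
```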
Typically, this algorithm may not yield the best solution, but it often yields a near-optimal one (a solution very close to the best).
The Branch and Backtrack algorithm comes very close to a heuristic approach; however, it is not one, because it can reliably determine a solution to a solvable problem.
The algorithm partitions the problem space by decisions (much like the Divide and Conquer algorithm). For every decision it makes, it also tries the opposite decision. Often the decisions are yes-or-no questions, which branch the problem into two parts, although more branches are not uncommon; a binary tree is thus typically formed. In most cases a depth-first search is performed, but a breadth-first search is sometimes used instead; the choice depends on where the solution is most likely to occur.
The backtracking occurs when the algorithm determines that a particular decision cannot lead to a solution. It then returns (backtracks) to a previous decision and tries the other options. For example, when making change for $0.99, the algorithm might first decide to try a one-dollar coin, determine that this is not possible, and backtrack to try quarters instead.
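The $0.99 example can be sketched as follows (a minimal recursive version; the function name and coin list are my own, with amounts in cents):

```python
def backtrack_change(amount, coins):
    """Branch on each coin denomination; backtrack when a branch fails."""
    if amount == 0:
        return []                      # solved: no more change needed
    for coin in coins:
        if coin <= amount:             # branch: try this denomination
            rest = backtrack_change(amount - coin, coins)
            if rest is not None:       # this branch reached a solution
                return [coin] + rest
        # otherwise this branch is infeasible: fall through and backtrack
    return None                        # no branch worked; backtrack further

# Making 99 cents: the dollar coin (100) is rejected immediately,
# so the search backtracks and branches on the quarter instead.
print(backtrack_change(99, [100, 25, 10, 5, 1]))
```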
The Greedy algorithm works by taking, at each iteration, the largest element from the problem space that moves closest to the desired solution. A common scenario where humans apply this algorithm is making change.
Consider finding the optimal way to make $0.49. Normally, we would start with the largest denomination, the quarter, and work down toward pennies. This is exactly how the Greedy algorithm behaves: we take 1 quarter, 2 dimes, and 4 pennies, a total of 7 coins.
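That behavior can be sketched in a few lines (amounts in cents; the function name is my own):

```python
def greedy_change(amount, denominations):
    """At each step, take the largest coin that still fits."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while coin <= amount:          # greedily take as many as possible
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_change(49, [25, 10, 5, 1]))   # [25, 10, 10, 1, 1, 1, 1]: 7 coins
```

With US coin denominations the greedy choice happens to be optimal, but that is a property of this particular coin system, not of the algorithm.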
The Dynamic Programming algorithm is very similar to the Divide and Conquer algorithm, except that at each level it finds the optimal solution to the sub-problem. This is valuable because a solution that looks optimal at a lower level may not contribute to the optimal solution at a higher level.
Unlike the Greedy algorithm, this algorithm can recover from a poorly chosen sub-solution. It does so by storing any promising sub-solutions as it returns from each level; each stored sub-solution may lead to a better solution at a higher level. In contrast, the Greedy algorithm is stuck with every decision it has made.
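A minimal sketch of this, again using coin change (the function name and the denominations are my own, chosen precisely so that the greedy choice fails):

```python
def dp_min_coins(amount, denominations):
    """Store the best sub-solution for every smaller amount (the DP table)."""
    best = [0] + [None] * amount       # best[a] = fewest coins that make a
    for a in range(1, amount + 1):
        for coin in denominations:
            if coin <= a and best[a - coin] is not None:
                candidate = best[a - coin] + 1
                if best[a] is None or candidate < best[a]:
                    best[a] = candidate    # keep the optimal sub-solution
    return best[amount]

# With denominations {1, 3, 4}, Greedy makes 6 as 4+1+1 (3 coins),
# but the stored sub-solutions let Dynamic Programming find 3+3 (2 coins).
print(dp_min_coins(6, [1, 3, 4]))      # 2
```

The table of stored sub-solutions is exactly what lets the algorithm "recover": the greedy-looking choice of a 4-cent coin for amount 6 is considered but discarded in favor of the better sub-solution built from two 3-cent coins.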