**CHAPTER 3 PROPOSED ALGORITHMS**

**3.3. GREEDY ALGORITHMS**

The argmin in step 5 of Figure 3.4 is replaced by argmax in the case of maximization, so that a candidate component of maximum cost is chosen. The figure displays an instance of a greedy algorithm for a general combinatorial optimization problem.

The greedy method is used in optimization problems that involve searching through a collection of arrangements for one that maximizes or minimizes an objective function defined on those arrangements. To solve a given optimization problem, the solution progresses through a series of choices.

This series begins with a well-defined initial arrangement and then makes the choice that looks best at each step; the greedy method is not guaranteed to reach the optimal solution. However, these difficulties can be avoided for problems that possess the greedy-choice property: a globally optimal arrangement can be reached through a sequence of locally optimal decisions (that is, decisions that are the best among the options available at that point), starting from a well-understood arrangement [140].

**3.3.1 ELEMENTS OF THE GREEDY STRATEGY**

A greedy search generates a series of choices in the hope of obtaining an optimal solution to the problem. At each step, the selection picks the alternative that looks best in the current state. As a heuristic approach, it does not always obtain the optimal solution. In this part, we present several characteristics of greedy methods.

The technique for developing a greedy algorithm can be expressed in the following steps:

*Begin Greedy:*

*1. S ← ∅;*

*2. f(S) ← 0;*

*3. C ← {i ∈ E : S ∪ {i} is feasible};*

*4. while C ≠ ∅ do*

*5. i∗ ← argmin {cᵢ : i ∈ C};*

*6. S ← S ∪ {i∗};*

*7. f(S) ← f(S) + cᵢ∗;*

*8. C ← {i ∈ C \ {i∗} : S ∪ {i} is feasible};*

*9. end while;*

*10. return S, f(S);*

*End Greedy.*

**Figure 3.4** Pseudo-code of a greedy algorithm for a minimization problem
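The pseudocode of Figure 3.4 can be transcribed almost line for line into Python. In this sketch, the ground set, the cost function, and the feasibility test (here a simple cardinality bound used only for illustration) are assumptions; the figure leaves them abstract.

```python
def greedy(E, c, feasible):
    """Greedy minimization following Figure 3.4: repeatedly add the
    cheapest candidate that keeps the partial solution feasible.
    For maximization, replace min with max in the selection step."""
    S = set()                                   # step 1: S <- empty set
    f = 0                                       # step 2: f(S) <- 0
    C = {i for i in E if feasible(S | {i})}     # step 3: candidate set
    while C:                                    # step 4
        i_star = min(C, key=lambda i: c[i])     # step 5: argmin of c_i
        S.add(i_star)                           # step 6
        f += c[i_star]                          # step 7
        # step 8: keep only candidates that remain feasible
        C = {i for i in C - {i_star} if feasible(S | {i})}
    return S, f                                 # step 10

# Hypothetical instance: minimize total cost while selecting
# at most 3 elements.
costs = {'a': 4, 'b': 1, 'c': 3, 'd': 2, 'e': 5}
S, f = greedy(costs.keys(), costs, lambda T: len(T) <= 3)
# S is {'b', 'c', 'd'} (the three cheapest elements), f is 6
```

Note that the feasibility test is re-evaluated after every selection, exactly as in step 8 of the figure, so the candidate set shrinks monotonically and the loop always terminates.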

1. Determine the optimal substructure of the problem.

2. Develop a recursive solution.

3. Show that if we make the greedy selection, only one sub-problem remains.

4. Prove that it is always safe to make the greedy selection. (Steps 3 and 4 can occur in either order.)

5. Develop a recursive algorithm that implements the greedy strategy.

6. Convert the recursive algorithm into an iterative algorithm.
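Steps 5 and 6 above can be sketched for the activity-selection problem. This is an illustrative assumption of this sketch: activities are taken to be (start, finish) pairs already sorted by finish time, and the function names are hypothetical.

```python
def recursive_selector(acts, i=0, last_finish=0):
    """Step 5: recursive greedy. Skip activities that start before
    last_finish, take the first compatible one, and recurse on the
    single remaining sub-problem."""
    while i < len(acts) and acts[i][0] < last_finish:
        i += 1
    if i == len(acts):
        return []
    return [acts[i]] + recursive_selector(acts, i + 1, acts[i][1])

def iterative_selector(acts):
    """Step 6: the recursion is tail-like, so it converts directly
    into a single loop over the activities."""
    chosen, last_finish = [], 0
    for s, f in acts:
        if s >= last_finish:        # compatible with the last choice
            chosen.append((s, f))
            last_finish = f
    return chosen

# Hypothetical instance, sorted by finish time.
acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
# Both selectors return [(1, 4), (5, 7), (8, 11)]
```

Because the greedy choice leaves exactly one sub-problem, the recursive version carries no pending work after the recursive call, which is what makes the conversion to a loop immediate.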

For instance, in the activity-selection problem, we first defined the subproblems Sij, where both i and j vary. Having discovered that always making the greedy selection lets us restrict the subproblems to the form Sk, we could alternatively have fashioned the optimal substructure with a greedy choice in mind, so that each choice leaves just a single sub-problem to solve. We would then have to show that a greedy selection (the first activity Si to finish in Sk), combined with an optimal solution to the remaining sub-problem, yields an optimal solution. More generally, greedy algorithms are designed according to the following steps:

1. Cast the optimization problem as one in which we make a selection and are left with a single subproblem to solve.

2. Prove that there is always an optimal solution to the original problem that makes the greedy selection, so that the greedy selection is always safe.

3. Demonstrate optimal substructure by showing that, having made the greedy selection, what remains is a subproblem with the property that if we combine an optimal solution to the subproblem with the greedy selection already made, we arrive at an optimal solution to the original problem.

The primary component is the greedy-choice property: the ability to assemble a globally optimal solution by making locally optimal (greedy) selections. In other words, when considering which choice to make, we make the choice that looks best for the problem at hand, without considering the results of sub-problems. This is where greedy algorithms differ from dynamic programming. In dynamic programming, we make a choice at each step, but the choice usually depends on the solutions to subproblems. In a greedy algorithm, we make whatever choice seems best at the moment and then solve the subproblem that remains. The choice made by a greedy algorithm may depend on the choices made so far, but it cannot depend on any future choices or on the solutions to subproblems.

**3.3.2 OPTIMAL SUBSTRUCTURE**

A problem exhibits optimal substructure when an optimal solution to the problem contains within it optimal solutions to subproblems. This characteristic is a principal ingredient in assessing the applicability of dynamic programming as well as of greedy algorithms. As an instance of optimal substructure, recall that we showed in the previous section that if an optimal solution to subproblem Sij includes an activity ak, then it must also contain optimal solutions to the subproblems Sik and Skj. Given this optimal substructure, once we know which activity to use as ak, we can build an optimal solution to Sij by selecting ak together with all the activities in optimal solutions to the subproblems Sik and Skj.

A more direct strategy can be applied to optimal substructure when using it with greedy algorithms. As discussed earlier, we have the advantage of assuming that a subproblem arose from having made the greedy selection in the original problem. All that needs to be shown is that an optimal solution to the subproblem, combined with the greedy choice already made, yields an optimal solution to the original problem. This scheme implicitly uses induction on the subproblems to show that making the greedy choice at every step produces an optimal solution [141].
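The inductive claim can also be spot-checked empirically. The sketch below is an illustration rather than a proof: it compares the greedy earliest-finish-first rule for activity selection against an exhaustive search on small random instances; the helper names and the instance generator are assumptions of this sketch.

```python
import random
from itertools import combinations

def greedy_count(acts):
    """Size of the greedy solution: earliest finish time first."""
    acts = sorted(acts, key=lambda a: a[1])
    count, last = 0, 0
    for s, f in acts:
        if s >= last:
            count, last = count + 1, f
    return count

def brute_count(acts):
    """Size of an optimum found by trying every subset."""
    def compatible(sub):
        sub = sorted(sub, key=lambda a: a[1])
        return all(sub[i][1] <= sub[i + 1][0] for i in range(len(sub) - 1))
    return max(r for r in range(len(acts) + 1)
               if any(compatible(c) for c in combinations(acts, r)))

# On every small random instance, the greedy result should match the
# exhaustive optimum, consistent with the inductive argument above.
random.seed(0)
trials = []
for _ in range(30):
    acts = [(s, s + random.randint(1, 4))
            for s in (random.randint(0, 9) for _ in range(6))]
    trials.append(greedy_count(acts) == brute_count(acts))
```

Such a harness cannot replace the induction, but a single mismatch would immediately refute the greedy-choice property for the problem being tested.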