Design and Analysis of Algorithms
Introduction to Analysis of Algorithms
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. Informally, an algorithm is any well-defined computational procedure that takes some value or set of values as input and produces some value or set of values as output. Thus, an algorithm is a sequence of computational steps that transform the input into the output.
Some terms related to algorithms are given as follows
Computational Problem
A computational problem is a specification of the desired input-output relationship. Complexity theory classifies computational problems according to their inherent difficulty and relates those classes to each other.
Instance
An instance of a problem is all the inputs needed to compute a solution to the problem. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance.
Algorithm
An algorithm is a well-defined computational procedure that transforms inputs into outputs, achieving the desired input-output relationship.
Correct Algorithm
A correct algorithm halts with the correct output for every input instance. We then say that the algorithm solves the problem.
Analysis of Algorithms
Analysis of an algorithm depends upon various factors such as memory, communication bandwidth or computer hardware. The measure most often used in analysing algorithms is the computational time that an algorithm requires to complete the given task. The running time of an algorithm is measured by counting the number of steps it executes.
Complexity
Algorithms can be classified by the amount of time they need to run to completion relative to their input size.
The analysis of an algorithm focuses on the complexity of the algorithm.
Time Complexity
The time complexity is a function that gives the amount of time required by an algorithm to run to completion. It quantifies the amount of time taken by an algorithm to run as a function of the length of the input.
Space Complexity
The space complexity is a function that gives the amount of space required by an algorithm to run to completion.
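For example, the following sketch (a hypothetical array_sum function, written only for illustration) runs in O(n) time, since the loop body executes once per element, but needs only O(1) extra space beyond the input.

```python
def array_sum(arr):
    """Sum n numbers in O(n) time using O(1) extra space."""
    total = 0              # one assignment; constant extra space
    for x in arr:          # loop body runs n times
        total += x         # one addition per element -> about n steps overall
    return total

print(array_sum([3, 1, 4, 1, 5]))  # 14
```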
Key Points
- We usually refer to the time complexity in three terms
- Best case time complexity
- Average case time complexity
- Worst case time complexity
- Worst case time complexity It is the function defined by the maximum amount of time needed by an algorithm for an input of size n.
- Average case time complexity It is the expected running time of an algorithm on typical input data of size n.
- Best case time complexity It is the minimum amount of time that an algorithm requires for an input of size n. All three cases are illustrated by the linear search sketch after this list.
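The three cases can be seen concretely in linear search; the sketch below (the function name linear_search and the sample data are illustrative assumptions) finds the target with one comparison in the best case, must examine all n elements in the worst case, and inspects about n/2 elements on average when the target is equally likely to be anywhere.

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i       # best case: target at index 0 -> 1 comparison, O(1)
    return -1              # worst case: target absent -> n comparisons, O(n)

data = [7, 2, 9, 4]
print(linear_search(data, 7))   # 0  (best case)
print(linear_search(data, 5))   # -1 (worst case: scans all n elements)
```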
Asymptotic Analysis
Asymptotic analysis can be described by the following points
- It describes the behaviour of a function in the limit.
- There are multiple asymptotic notations that compare the sizes of functions

O (Big Oh)      ≤  Asymptotic upper bound
Ω (Big Omega)   ≥  Asymptotic lower bound
Θ (Big Theta)   =  Asymptotic tight bound
o (Small Oh)    <  Upper bound that is not asymptotically tight
ω (Small Omega) >  Lower bound that is not asymptotically tight
Asymptotic Notations
The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, …}.
The useful asymptotic notations are given as follows
Big Oh (O)
If we write f(n) = O(g(n)), then there exist positive constants c and n₀ such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀
Or we can say g(n) is an asymptotic upper bound for f(n). e.g., 2n² = O(n³) with constant c = 1.
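As a quick sanity check of this example, the sketch below (an illustrative helper, holds_big_oh, written for this text) tests the inequality 2n² ≤ c·n³ with c = 1 over a finite range. Sampling can only support, never prove, an asymptotic claim, but it catches a wrong guess quickly; the same style of check works for Ω by flipping the inequality.

```python
def holds_big_oh(f, g, c, n0, n_max=10_000):
    """Check 0 <= f(n) <= c*g(n) for every sampled n in [n0, n_max)."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max))

# 2n^2 = O(n^3): the bound 2n^2 <= 1*n^3 holds for every n >= 2
print(holds_big_oh(lambda n: 2 * n**2, lambda n: n**3, c=1, n0=2))  # True
```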
Big Omega (Ω)
If we write f(n) = Ω(g(n)), then there exist positive constants c and n₀ such that
f(n) ≥ c·g(n) ≥ 0 for all n ≥ n₀
Function g(n) is an asymptotic lower bound for f(n).
e.g., √n = Ω(log₂ n) with constant c = 1.
Big Theta (Θ)
If we write f(n) = Θ(g(n)), then there exist positive constants c₁, c₂ and n₀ such that
c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀
Function g(n) is an asymptotically tight bound for f(n).
Theorem f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Small Oh (o)
Notation If we write f(n) = o(g(n)), then for any positive constant c > 0 there exists a constant n₀ > 0 such that
0 ≤ f(n) < c·g(n) for all n ≥ n₀
Function g(n) is an upper bound for f(n) that is not asymptotically tight.
e.g., 1. n^1.99999 = o(n²)  2. n² ≠ o(n²)
Small Omega (ω)
Notation If we write f(n) = ω(g(n)), then for any positive constant c > 0 there exists a constant n₀ > 0 such that
f(n) > c·g(n) ≥ 0 for all n ≥ n₀
Function g(n) is a lower bound for f(n) that is not asymptotically tight.
e.g., n^2.00001 = ω(n²)
and n² ≠ ω(n²)
Comparison of Asymptotic Notations Based on Relational Properties
Assume f(n) and g(n) are asymptotically positive.
The relational properties are given as under
Transitivity
f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n))
This property also holds for O, Ω, o and ω.
Reflexivity
f(n) = Θ(f(n))
This property also holds for O and Ω.
Symmetry
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
Transpose Symmetry
f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
f(n) = o(g(n)) if and only if g(n) = ω(f(n))
Recurrences
A recurrence is a function defined in terms of
- One or more base cases
- Itself, with smaller arguments
e.g.,
T(n) = 1, if n = 1
T(n) = T(n − 1) + 1, if n > 1
Its solution is T(n) = n.
In algorithm analysis, we usually express both the recurrence and its solution using asymptotic notation.
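To see that T(n) = n solves this recurrence, one can evaluate it directly; the small sketch below simply mirrors the two cases of the definition.

```python
def T(n):
    """Evaluate T(1) = 1, T(n) = T(n - 1) + 1 for n > 1."""
    if n == 1:
        return 1
    return T(n - 1) + 1

print([T(n) for n in range(1, 6)])  # [1, 2, 3, 4, 5], i.e. T(n) = n
```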
Methods to Solve the Recurrence Relation
There are two methods to solve the recurrence relation, given as
1. Substitution method
2. Master method
Substitution Method
The substitution method is a condensed way of proving an asymptotic bound on a recurrence by induction. In the substitution method, instead of trying to find an exact closed form solution, we only try to find a closed form bound on the recurrence.
There are two steps in this method
- Guess the solution
- Use induction to find the constants and show that the solution works.
Key Points
- The substitution method is a powerful approach that can prove upper bounds for almost all recurrences.
- The substitution method requires the use of induction.
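As an illustration, consider the standard recurrence T(n) = 2T(n/2) + n (used here only as an example, not tied to a particular algorithm).
Step 1 Guess T(n) = O(n log n), i.e. T(n) ≤ c·n·log₂ n for some constant c > 0.
Step 2 Assume the bound holds for n/2, so T(n/2) ≤ c(n/2) log₂(n/2). Then
T(n) ≤ 2·c(n/2) log₂(n/2) + n = c·n·log₂ n − c·n + n ≤ c·n·log₂ n
for every c ≥ 1, so the guess holds and T(n) = O(n log n).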
Master Method
The master method gives us a quick way to find solutions to recurrence relations of the form
T(n) = aT(n/b) + f(n)
where a and b are constants, a ≥ 1 and b > 1. Here, a represents how many recursive calls are made, b represents the factor by which the work is reduced in each recursive call and f(n) represents how much work is done by each call.
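Before stating the solution, it can help to expand such a recurrence numerically. The sketch below (assuming T(1) = 1 and rounding n/b up, since the text leaves the rounding unspecified) evaluates T(n) = a·T(n/b) + f(n) for the merge-sort-like case a = 2, b = 2, f(n) = n.

```python
import math

def expand(n, a, b, f):
    """Numerically evaluate T(n) = a*T(ceil(n/b)) + f(n) with T(1) = 1."""
    if n <= 1:
        return 1
    return a * expand(math.ceil(n / b), a, b, f) + f(n)

# For a = 2, b = 2, f(n) = n the values track n*log2(n) closely
for n in (2**10, 2**12, 2**14):
    print(n, expand(n, 2, 2, lambda m: m), round(n * math.log2(n)))
```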
Solution of Recurrence Relation
- The solution of the recurrence relation will be in the form of
T(n) = n^(log_b a) · U(n), where h(n) = f(n) / n^(log_b a)
- Now, the values of U(n) and h(n) are related in three cases as follows
Case 1 If h(n) = n^r, r < 0, then U(n) = O(1).
Case 2 If h(n) = n^r, r > 0, then U(n) = Θ(h(n)).
Case 3 If h(n) = (log₂ n)^i, i ≥ 0, then U(n) = Θ((log₂ n)^(i+1) / (i + 1)).
e.g., for T(n) = 2T(n/2) + n, we have a = 2, b = 2, n^(log_b a) = n and h(n) = n/n = 1 = (log₂ n)⁰, so Case 3 gives U(n) = Θ(log₂ n) and
T(n) = Θ(n log n)
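For comparison, a widely used simplified statement of the master method handles the special case f(n) = Θ(n^d) by comparing d with log_b a. The sketch below (the function name master_case is ours, not from a library) encodes that three-case rule and agrees with the example above.

```python
import math

def master_case(a, b, d):
    """Simplified master theorem for T(n) = a*T(n/b) + Theta(n^d)."""
    crit = math.log2(a) / math.log2(b)   # critical exponent log_b(a)
    if d < crit:
        return f"Theta(n^{crit:g})"      # leaves dominate
    if d == crit:                        # exact float compare is fine for
        return f"Theta(n^{d} log n)"     # the small integer cases used here
    return f"Theta(n^{d})"               # root's work dominates

print(master_case(2, 2, 1))  # Theta(n^1 log n), i.e. Theta(n log n) as above
print(master_case(8, 2, 2))  # Theta(n^3)
print(master_case(2, 4, 1))  # Theta(n^1)
```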