Data Structures UNIT-1 Chapter-1

 

Unit 1

Algorithm Analysis:-

Algorithm Analysis: Introduction to Algorithms, Algorithm Analysis, Asymptotic Notations. Introduction to Arrays and Abstract Data Type (ADT). Lists: lists using arrays and linked lists - Singly Linked List, Doubly Linked List, Circular Linked List.

Introduction to Algorithm

Algorithm Definition: - An algorithm is a finite set of instructions used to solve a problem. (Or) An algorithm is a step-by-step process for solving a problem, where each step performs an intermediate task. An algorithm contains a finite number of steps that lead to the solution of the problem.

Properties/Characteristics of an algorithm:-

Algorithm has the following basic properties

1. Input: - An algorithm has zero (0) or more inputs.

2. Output: - An algorithm must produce one (1) or more outputs.

3. Finiteness:-An algorithm must contain a finite/countable number of steps.

4. Definiteness: - Each step of an algorithm must be stated clearly and unambiguously.

5. Effectiveness: - An algorithm should be effective, i.e., each operation must be basic enough that a person could, in principle, carry it out with the given inputs in a finite period of time using paper and pencil.

 

Categories of Algorithm:

Based on the types of steps they contain, algorithms can be divided into three categories, namely

 Sequence

 Selection and

 Iteration

 

Sequence: The steps described in the algorithm are performed successively, one by one, without skipping any step. The sequence of steps defined in the algorithm should be simple and easy to understand. Every instruction of such an algorithm is executed, because no selection or conditional branching exists in a sequence algorithm.

Example:

// adding two numbers

Step 1: start

Step 2: read a,b

Step 3: Sum=a+b

Step 4: write Sum

Step 5: stop

 

Selection: Sequence-type algorithms are not sufficient to solve problems that involve decisions and conditions. To solve a problem that involves decision making or option selection, we use a selection-type algorithm. The general format of a selection statement is shown below:

if(condition)

            statement-1;

else

            statement-2;

The above syntax specifies that if the condition is true, statement-1 is executed; otherwise, statement-2 is executed. If the operation is unsuccessful, the sequence of steps should be changed/corrected so that the system re-executes until the operation is successful.

 

 

Iteration: Iteration-type algorithms are used to solve problems that involve the repetition of statements. In this type of algorithm, a group of statements is repeated 'n' number of times.

Example 1: factorial of a number n

        Step 1 : start

        Step 2 : initialize f=1, i=1

        Step 3 : check the condition i<=n

        Step 4 : if the condition is true, then go to Step 5; otherwise go to Step 7

        Step 5 : f=f*i and i=i+1

        Step 6 : go to Step 3

        Step 7 : print the value of factorial f

        Step 8 : stop

 

Performance Analysis (or) Analysis of an algorithm

Analysis of an algorithm is the task of determining how much computing time (time complexity) and storage (space complexity) the algorithm requires.

 

We can analyze performance of an algorithm using Time complexity and Space complexity.

 

Time Complexity: -  The amount of time that an algorithm requires for its execution is known as time complexity.

Time complexity is mainly classified into 3 types.

            1. Best case time complexity

            2. Worst case time complexity

            3. Average case time complexity

1. Best case time complexity: - If an algorithm takes minimum amount of time for its execution then it is called as Best case time complexity.

Ex: - While searching for an element using linear search, if the key element is found at the first position, then it is the best case. Ex: 10, 20, 30, 40, 50; let the key element be 10.

 

2. Worst case time complexity: - If an algorithm takes maximum amount of time for its execution then it is called as Worst case time complexity.

 

Ex: - While searching for an element using linear search, if the key element is found at the last position, then it is the worst case. Ex: 10, 20, 30, 40, 50; let the key element be 50.

 

3. Average case time complexity: - If an algorithm takes average amount of time for its execution then it is called as Average case time complexity.

 

Ex: - While searching for an element using linear search, if the key element is found at the middle position, then it is the average case. Ex: 10, 20, 30, 40, 50; let the key element be 30.

 

There are 2 types of computing times.

            1. Compilation time

            2. Execution time or run time

·       The time complexity is generally calculated using execution time or run time.

·       It is difficult to calculate time complexity using clock time because in a multiuser system, the execution time depends on many factors such as system load, number of other programs that are running.

·       So, time complexity is generally calculated using Frequency count (step count) and asymptotic notations.

 

Frequency count (or) Step count:-

It denotes the number of times a statement is executed.

A count value is assigned to each statement depending on its kind:

·       For comments, declarations the frequency count is zero (0).

·       For Assignments, return statements the frequency count is 1.

·       Ignore lower order exponents when higher order exponents are present.

·       Ignore constant multipliers.

 

Ex: - A sample program to calculate the time complexity of the sum of cubes of the first 'n' natural numbers.

 

int   sum(int  n)                       --------------------0

{                                              --------------------0

            int  i,s;                         --------------------0

            s=0;                             --------------------1

            for(i=1;i<=n;i++)       --------------------(n+1)

{

                        s=s+i*i*i;        --------------------n

            }

return s;                       --------------------1

}           

                                                                      ______

                                                                        2n+3   

                                                                      ______

So, time complexity=O(n)

 

General rules or norms for calculating time complexity:-    

 

Rule 1:-The running time of for loop is the running time of statements in for loop.

            Ex:-  for(i=0;i<n;i++)   -----------(n+1)

                                    s=s+i;    ------------ n

                                                            __________

                                                             2n+1

                                                            __________

   So, time complexity=O(n)

 

Rule 2:-The total running time of a statement inside a group of nested loops is its own running time multiplied by the product of the sizes of all the loops.

            Ex:-  for(i=0;i<n;i++)                    ---------------- (n+1)       

                           for(j=0;j<n;j++)               ---------------- n(n+1)     

                                    c[i][j]=a[i][j]+b[i][j];  -------------   (n*n)

                                                                                         _____________

                                                                                         2n²+2n+1

                                                                                         _____________                

So, time complexity=O(n²)

 

Rule 3:-When statements or loops appear one after another, the total running time is dominated by the maximum one.

            Ex:- for(i=0;i<n;i++)                         -----------  (n+1)

         {

                        a[i]=0;                                     -----------     n

                      }

                    for(i=0;i<n;i++)                          ------------  (n+1)

        {

                        for(j=0;j<n;j++)                      ------------  n(n+1)

                         {

                                    a[i]=a[i]+a[j]+i+j;      -------------  n²

                                          }

                                                                                    ____________

                                                                                    2n²+4n+2

                                                                                    ____________

So, time complexity=O(n²)

 

Rule 4:-   if-else       

              if(cond)

                        s1

             else

                        s2

Here running time is maximum of running times of s1 and s2.

  Ex:-   int sum(int  n)                        --------------   0

            {                                              ---------------  0

 int  s,i;                        --------------   0

                        s=0;                             --------------   1

                        if(n<=0)                      --------------   1

                                    return  n;         -------------     1

                        else                              -------------    0

                        {                                  -------------    0

                                    for(i=0;i<n;i++)  --------      n+1

                                                s=s+i;      ----------  n

                                    return  s;          -----------     1

                        }

            }

                                                            ____________

                                                                        2n+5

                                                            __________________

So, time complexity=O(n)

 

What is Space complexity?

When we design an algorithm to solve a problem, it needs some computer memory to complete its execution. For any algorithm, memory is required for the following purposes...

  1. To store program instructions.
  2. To store constant values.
  3. To store variable values.
  4. And for a few other things like function calls, jumping statements, etc.

Space complexity of an algorithm can be defined as follows...

Total amount of computer memory required by an algorithm to complete its execution is called as space complexity of that algorithm.

Generally, when a program is under execution it uses the computer memory for THREE reasons. They are as follows...

  1. Instruction Space: It is the amount of memory used to store compiled version of instructions.
  2. Environmental Stack: It is the amount of memory used to store information of partially executed functions at the time of function call.
  3. Data Space: It is the amount of memory used to store all the variables and constants.

 Note - When we want to perform analysis of an algorithm based on its Space complexity, we consider only Data Space and ignore Instruction Space as well as Environmental Stack.
That means we calculate only the memory required to store Variables, Constants, Structures, etc.,

To calculate the space complexity, we must know the memory required to store values of different data types. These sizes depend on the compiler; the examples below assume an older 16-bit C compiler (such as Turbo C), which requires the following...

  1. 2 bytes to store an Integer value.
  2. 4 bytes to store a Floating Point value.
  3. 1 byte to store a Character value.
  4. 8 bytes to store a double value.

 

 

 

Consider the following piece of code...

Example 1

int square(int a)

{

        return a*a;

}

In the above piece of code, it requires 2 bytes of memory to store variable 'a' and another 2 bytes of memory is used for return value.

That means, totally it requires 4 bytes of memory to complete its execution. And this 4 bytes of memory is fixed for any input value of 'a'. This space complexity is said to be Constant Space Complexity.

Example 2

int sum(int A[ ], int n)
{
   int sum = 0, i;
   for(i = 0; i < n; i++)
      sum = sum + A[i];
   return sum;
}
In the above piece of code, it requires:

  1. 'n*2' bytes of memory to store the array parameter 'A[ ]'
  2. 2 bytes of memory for the integer parameter 'n'
  3. 4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each)
  4. 2 bytes of memory for the return value.
 

That means, totally it requires '2n+8' bytes of memory to complete its execution. Here, the total amount of memory required depends on the value of 'n'. As 'n' value increases the space required also increases proportionately. This type of space complexity is said to be Linear Space Complexity.

If the amount of space required by an algorithm is increased with the increase of input value, then that space complexity is said to be Linear Space Complexity.

 

 

 

 

 

 

 

 

Asymptotic notations

 

Asymptotic notations are used to calculate time complexity of algorithm.

Using asymptotic notations we can calculate best case, worst case and average case time complexity.

There are 5 types of asymptotic notations.

1. Big Oh notation   (O)

2. Big Omega notation (Ω)

3. Theta notation (θ)

4. Little Oh notation (o)

5. Little omega notation (ω)

 

1. Big Oh notation (O):- 

·       It is a method of representing upper bound of algorithm’s running time.

·       Using Big Oh notation we can calculate maximum amount of time taken by an algorithm for its execution.

·       So, Big Oh notation is useful to calculate worst case time complexity.

 

Definition: -   Let f(n) and g(n) be two non-negative functions. Then f(n)=O(g(n)) if there exist two positive constants c and n0 such that f(n)<=c*g(n) for all n>=n0.

 

 

Graphical representation of Big Oh notation(O):-

 

           


 

 

Ex:- f(n)=3n+2

        g(n)=n;

        To show   f(n)=O(g(n))

            f(n)<=c*g(n)    ,  c>0, n0>=1

            3n+2<=c*n

        Let C=4, n0=1   then 5<=4        wrong

                       n0=2   then 8<=8        correct

                       n0=3   then 11<=12    correct

                       n0=4   then 14<=16    correct

        So,   n0>=2

     Hence, 3n+2<=4*n for all n>=2

 

 

2. Big Omega notation (Ω):-

 

·       It is a method of representing lower bound of algorithm’s running time.

·       Using Big Omega notation we can calculate minimum amount of time taken by an algorithm for its execution.

·       So, Big Omega notation is useful to calculate best case time complexity.

 

Definition: -   Let f(n) and g(n) be two non-negative functions. Then f(n)= Ω (g(n)) if there exist two positive constants c and n0 such that f(n)>=c*g(n) for all n>=n0.

 

Graphical representation of Big Omega notation(Ω)

 


f(n)= Ω (g(n))

 

Ex:- f(n)=3n+2

        g(n)=n;

        To show   f(n)= Ω(g(n))

            f(n)>=c*g(n)    ,  c>0, n0>=1

            3n+2>=c*n

      Let C=1, n0=1   then  5>=1    correct

                    n0=2   then  8>=2    correct

                    n0=3   then  11>=3    correct

                

        So,   n0>=1

     Hence, 3n+2>=1*n for all n>=1

 

 

3. Theta notation (θ):-

 

·       It is a method of representing an algorithm’s running time between upper bound and lower bound.

·       Using Theta notation we can calculate average amount of time taken by an algorithm for its execution.

·       So, Theta notation is useful to calculate average case time complexity.

 

Definition: -   Let f(n) and g(n) be two non-negative functions. Then f(n)= θ (g(n)) if there exist three positive constants c1, c2, and n0 such that c1*g(n)<=f(n)<=c2*g(n) for all n>=n0.

 

Graphical representation of Theta notation(θ):-

 


 

Ex:- f(n)=3n+2

        g(n)=n;

        To show   f(n)= θ(g(n))

            c1*g(n)<=f(n)<=c2*g(n),  c1>0,

                                                      c2>0

                                                       n0>=1

            c1*n<=3n+2<= c2*n

            c2=4 ,  c1=1,

                 Let n0=1   then  1<=5<=4    wrong

                 Let n0=2   then  2<=8<=8    correct

                 Let n0=3   then  3<=11<=12    correct

                 Let n0=4   then  4<=14<=16    correct

        So,   n0>=2

     Hence, 1*n<=3n+2<=4*n for all n>=2

 

 

4. Little Oh notation (o):-

                      Let f(n) and g(n) be two non-negative functions. Then f(n)=o(g(n)) if

lim (n→∞) f(n)/g(n) = 0

 

5. Little Omega notation (ω):-

Let f(n) and g(n) be two non-negative functions. Then f(n)=ω(g(n)) if

lim (n→∞) g(n)/f(n) = 0

                       

 

Various types of computing times or typical growth rates:-

 

O(1)        → constant computing time

O(n)        → linear computing time

O(n²)       → quadratic computing time

O(n³)       → cubic computing time

O(2ⁿ)       → exponential computing time

O(log n)    → logarithmic computing time

O(n log n)  → log-linear (linearithmic) computing time

 

n      n²     n³     2ⁿ     log n   n log n

1      1      1      2      0       0

2      4      8      4      1       2

4      16     64     16     2       8

8      64     512    256    3       24

 

The relation among various computing times is,

O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ)

 

 
