Complexity of Algorithm
Introduction
- In everyday language, complexity means how difficult or involved a process is; in the context of algorithms, it has the precise technical meaning defined below.
Definition
- Algorithm complexity is a measure of the resources required by an algorithm as a function of input size to solve a problem. The two main resources are time (how long the algorithm takes to run) and space (how much memory it uses).
- The complexity of an algorithm M is the function f(n) which gives the running time and/or storage space requirement of the algorithm in terms of the size n of the input data.
Characteristics
- Complexity is usually expressed as a function of the input size, denoted by n.
- Generally, complexity refers to the running time (i.e., Time Complexity) of the algorithm.
- Algorithms with less time complexity and less memory space are considered better algorithms.
- The complexity function f(n) gives the running time of an algorithm, which depends not only on the size n of the input data but also on the nature of the particular data.
Advantages
- Algorithm complexity helps determine whether an algorithm is practical and scalable, particularly for large input sizes.
- Helps compare algorithms.
- Predicts performance for large inputs.
- Guides the selection of efficient solutions.
- Essential in the theory of computation (TOC) and algorithm design.
Types of Complexity
[1] The performance analysis (effectiveness and efficiency) of an algorithm depends mainly on two factors: the memory/space required and the execution time. On this basis, the complexity of an algorithm may be of two types –
(A) Space Complexity
- The space complexity of a program or an algorithm is the maximum amount of memory space required or used by the algorithm to run to its completion, expressed as a function of the input size. This includes the memory for the input data, auxiliary data structures, and the call stack (for recursive algorithms).
- The space needed by a program has the following components: –
- Instruction space: Instruction space is the space needed to store the compiled version of the program instructions. The amount of instruction space needed depends on factors such as: –
• The compiler used to translate the program into machine code.
• The compiler options in effect at the time of compilation.
• The target computer.
- Data space: Data space is the space needed to store all constant and variable values. It has two components –
(i) Space needed by constants and simple variables in the program.
(ii) Space needed by dynamically allocated objects such as arrays and class instances.
- Environment stack space: The environment stack is used to save the information needed to resume execution of partially completed functions (for example, return addresses and the values of local variables during recursive calls).
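To illustrate data space versus environment stack space, here is a minimal Python sketch (illustrative, not from the original text). Both functions compute the same sum, but the recursive version consumes O(n) environment stack space, one frame per pending call, while the iterative version needs only O(1) auxiliary space.

```python
def sum_iterative(a):
    # Auxiliary data space O(1): a fixed number of simple variables,
    # regardless of the input size n = len(a).
    total = 0
    for x in a:
        total += x
    return total

def sum_recursive(a, i=0):
    # Environment stack space O(n): each partially completed call keeps
    # a frame (return address, local variables) on the stack until it resumes.
    if i == len(a):
        return 0
    return a[i] + sum_recursive(a, i + 1)
```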
(B) Time Complexity
- The time complexity of a program or an algorithm is the amount of processing time required by the algorithm to run to completion.
- In other words, time complexity is the number of elementary operations (like additions, multiplications, comparisons) performed by an algorithm as a function of the input size.
- Also, the number of machine instructions that a program executes during its run is called its time complexity; this number depends primarily on the size of the program’s input and the operations it performs. The time tp taken by a program P is the sum of its compile time and its run (execution) time. The compile time does not depend on the instance characteristics (i.e., the number of inputs, the number of outputs, the magnitudes of inputs and outputs, etc.), so we are concerned with the running time of a program only.
- The time complexity is expressed using asymptotic notation (like Big O) to describe the upper bound of the growth rate of the running time. The limiting behavior of the complexity as size increases is called the asymptotic time complexity. It is the asymptotic complexity of an algorithm that ultimately determines the size of problems that can be solved by the algorithm.
Common Time Complexities (from fastest to slowest)
- O(1): Constant time – the running time does not depend on the input size (e.g., array indexing, hash table lookup).
- O(log n): Logarithmic time – often seen in divide-and-conquer algorithms that halve the problem size at each step (e.g., binary search).
- O(n): Linear time – the running time grows linearly with the input size, i.e., the runtime is proportional to the input size (e.g., a simple loop: linear search, traversing a linked list, counting elements).
- O(n log n): Linearithmic time – common in efficient sorting algorithms (e.g., merge sort, heap sort, and quick sort on average).
- O(n^2): Quadratic time – often seen in two nested loops (e.g., bubble sort, selection sort).
- O(n^3): Cubic time – often seen in algorithms with three nested loops (e.g., naive matrix multiplication, 3D grid processing).
- O(2^n): Exponential time – the running time typically doubles with each additional input element, as in brute-force algorithms (e.g., the naive recursive solution for the Fibonacci sequence).
- O(n!): Factorial time – extremely slow; grows faster than exponential. Typical of generating all permutations, as in brute-force solutions to the traveling salesman problem.
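To make a few of these classes concrete, here is a minimal Python sketch (illustrative, not from the original text) with one function per selected class; the comments name the class each function falls into.

```python
def first_element(a):
    # O(1): a single indexing operation, independent of len(a).
    return a[0]

def has_duplicate(a):
    # O(n^2): two nested loops compare every pair of elements.
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return True
    return False

def fib(n):
    # O(2^n): the naive recursive Fibonacci from the list above;
    # each call spawns two more, so the work roughly doubles per extra n.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```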
Calculation of Time Complexity:
- The time complexity is measured by counting the number of key operations involved in the process; for example, in sorting and searching algorithms, the key operation is the comparison. In other words, the time complexity of an algorithm is calculated from the number of steps (the step count) taken by the algorithm to compute the function it was written for or to complete its task. The number of steps is itself a function of the instance characteristics.
- To calculate the time complexity of a program, we will measure the complexity for every step in the program and then calculate the overall time complexity.
- In time complexity, we consider run time only.
- The time required by an algorithm is determined by the number of elementary operations.
- By convention, the following operations are each counted as a single elementary step when calculating the running-time complexity of a program: –
- Assigning a value to a variable
- Calling a function
- Performing an arithmetic operation
- Comparing two variables
- Indexing into an array or following a pointer reference
- Returning from a function
- For example:
(i) To determine the running time of a program, we count the number of statements that are executed in the procedure; a worked sketch follows below.
The total time complexity is O(1) + O(n) * O(1) * O(1) + O(1) = O(n).
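The program this count refers to is not shown in the source; the following minimal Python sketch is an assumed stand-in, chosen so that its step costs match the total above (summing the n elements of a list):

```python
def array_sum(a):
    total = 0        # one assignment: O(1)
    for x in a:      # the loop body executes n times: O(n)
        total += x   # one addition and one assignment per pass: O(1) * O(1)
    return total     # one return: O(1)
```

Summing the step costs gives O(1) + O(n) * O(1) * O(1) + O(1), which simplifies to O(n).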
[2] In general, there are three cases to consider when finding the complexity function f(n) of an algorithm. On this basis, the complexity may take three forms –
(A) Best Case (B) Average Case (C) Worst Case
(A) Best Case
- In the best case, the algorithm executes in the minimum number of steps for a given set of parameters.
- The minimum possible value of f(n) over all possible inputs is called the best case.
- The best case corresponds to the arrangement of input data for which the algorithm performs best.
- To calculate the best-case complexity of a program, we count the minimum number of steps that are executed for the completion of the given program.
(B) Average Case
- In the average case, the algorithm executes in the average number of steps over all inputs of a given size.
- The value of f(n) lies between the maximum and the minimum for any possible input.
- Generally, the average case means the expected value of f(n).
- To calculate the average-case complexity of a program, we count the average number of steps that are executed for the completion of the given program.
- The analysis of the average case assumes a certain probabilistic distribution for the input data. One such assumption might be that all possible permutations of an input data set are equally likely.
- The average case also uses concepts from probability theory. Suppose the step counts N1, N2, …, Nk occur with respective probabilities p1, p2, …, pk. Then the expectation or average value E is given by E = N1·p1 + N2·p2 + … + Nk·pk.
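As a quick worked instance of this formula (anticipating Example 1 below): in a linear search over n elements where the key is equally likely to be at any position, Nk = k and pk = 1/n for k = 1, 2, …, n, so E = (1 + 2 + … + n)/n = [n(n+1)/2]/n = (n+1)/2.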
(C) Worst Case
- In the worst case, the algorithm executes in the maximum number of steps for a given set of parameters.
- The maximum possible value of f(n) over all possible inputs is called the worst case.
- To calculate the worst-case complexity of a program, we count the maximum number of steps that are executed for the completion of the given program.
Example to explain Best, Average, and Worst case:
Example 1:
To explain the best, worst, and average cases of an algorithm, consider a linear array A[1…n] containing n elements. Suppose we want to search for a specific element x in A and find its location (index value) LOC. The linear search algorithm solves this problem by comparing the given value x, one by one, with each element in A. That is, we compare x with A[1], then A[2], and so on, until we either find a location LOC such that x = A[LOC] or reach the end of A without finding x.
The complexity of this search algorithm is given by the number C(n) of comparisons between x and the array elements A[K].
Best case: In this case, the searched element x is found at the first position of the array, i.e., x = A[1], so C(n) = 1.
Worst case: In this case, the searched element x is found at the last position of the array A, or x is not present in A at all; either way, x is compared with every element, so C(n) = n.
Average case: Here, we assume that the searched element x appears in the array A and is equally likely to occur at any position. The number of comparisons can then be any of the numbers 1, 2, …, n, each occurring with probability p = 1/n, so C(n) = (1 + 2 + … + n)/n = (n+1)/2. This means that the average number of comparisons needed to find the location of x is approximately equal to half the number of elements in the array. As this shows, the average-case complexity of an algorithm is usually more complicated to analyze than the worst case; unless otherwise stated or implied, we therefore find and write the complexity of an algorithm in the worst case.
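A minimal Python sketch of the linear search just described (illustrative; the source presents the algorithm only in prose):

```python
def linear_search(A, x):
    # Returns LOC such that A[LOC] == x, or -1 if x is not present.
    # Comparisons C(n): 1 in the best case, n in the worst case,
    # and (n+1)/2 on average when x is equally likely at any position.
    for loc in range(len(A)):
        if A[loc] == x:   # one comparison of x with A[loc]
            return loc
    return -1             # x absent: all n comparisons were performed
```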
Example 2 (binary search on a sorted array A of n elements):
Best case: Clearly, the best case occurs when the first probe, at the middle of the array, already yields the searched element x. This requires only one step (the minimum possible), so the number of comparisons is C(n) = 1 = O(1).
Worst case: Suppose we are searching for a given key x that is either not in the array A or is found only after the search interval has been halved down to a single element. Clearly, this is the worst case, and the number of comparisons is C(n) = O(log n).
Average case: The average case occurs when finding the given key x requires neither the minimum (best-case) nor the maximum (worst-case) number of steps; here we assume that x is definitely present in the array A. In this case too, C(n) = O(log n).
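A minimal Python sketch of the binary search analyzed in Example 2 (illustrative; it assumes the array A is sorted in ascending order, which binary search requires):

```python
def binary_search(A, x):
    # Returns the index of x in the sorted list A, or -1 if absent.
    # Each iteration halves the search interval, so there are at most
    # O(log n) comparisons (the worst case); a first-probe hit is O(1).
    low, high = 0, len(A) - 1
    while low <= high:
        mid = (low + high) // 2
        if A[mid] == x:
            return mid         # best case: found on the first probe
        elif A[mid] < x:
            low = mid + 1      # discard the lower half
        else:
            high = mid - 1     # discard the upper half
    return -1                  # x not present after O(log n) halvings
```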