# Introduction to Algorithms

An algorithm is a step-by-step procedure for solving a problem in a finite amount of time. The procedure should be sequential and deterministic.

The following are the characteristics of an algorithm:

- It is composed of a finite sequence of actions applied to an input set.
- It has a unique initial action.
- Each action has a unique successor action.
- The sequence terminates with either a solution or a statement that the problem is insoluble.

## Algorithm Analysis

Through the analysis of an algorithm, we try to determine whether the given problem can be solved by the provided algorithm and, if so, how efficiently it solves the problem in terms of time and space. This is crucial for differentiating algorithms: some look similar but are different, while others look different but are equivalent.

## Computational Complexity Analysis

It focuses mainly on the theoretical study of the time and space requirements of algorithms.

Time complexity is the amount of work done by an algorithm, which is roughly proportional to the number of critical (inherently important) operations it performs.

Space complexity is the amount of memory required by an algorithm to produce the result.

## Pseudocode Example

Pseudocode is a high-level description of an algorithm: more structured than English prose, but less detailed than a program. It is the preferred notation for describing algorithms because it hides program design issues.

Example: pseudocode for finding the maximum element of a given array

```
Algorithm arrayMax(A, n)
    Input: array A of n integers
    Output: maximum element of A

    currentMax <- A[0]
    for i <- 1 to n - 1 do
        if A[i] > currentMax then
            currentMax <- A[i]
    return currentMax
```
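The pseudocode above translates almost line for line into Python; the following is a minimal sketch (the function and variable names are illustrative, not from the original):

```python
def array_max(a):
    """Return the maximum element of a non-empty list, mirroring arrayMax."""
    current_max = a[0]          # currentMax <- A[0]
    for i in range(1, len(a)):  # for i <- 1 to n - 1 do
        if a[i] > current_max:  #     if A[i] > currentMax then
            current_max = a[i]  #         currentMax <- A[i]
    return current_max

print(array_max([3, 1, 4, 1, 5, 9, 2, 6]))  # → 9
```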

## Counting Primitive Operations

To estimate the running time of an algorithm, we count its primitive operations (addition, subtraction, multiplication, division). This can be done by inspecting the pseudocode and determining the maximum number of primitive operations executed as a function of the input size.

Here is an example of counting primitive operations:

```
Algorithm arrayMax(A, n)                       # operations
    currentMax <- A[0]                           2
    for i <- 1 to n - 1 do                       1 + n
        if A[i] > currentMax then                2(n - 1)
            currentMax <- A[i]                   2(n - 1)
        { increment counter i (add & assign) }   2(n - 1)
    return currentMax                            1
                                       Total     7n - 2
```

From the table above, we see that algorithm arrayMax executes 7n - 2 primitive operations in the worst case.

Hence, the running time T(n) is bounded by two linear functions:

a(7n - 2) <= T(n) <= b(7n - 2)

where a is the time taken by the fastest primitive operation and b is the time taken by the slowest primitive operation.
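The per-line tally can be checked mechanically. The following sketch sums the counts from the table and confirms they simplify to 7n - 2 (the function name is illustrative):

```python
def count_ops_worst_case(n):
    """Sum the primitive-operation table for arrayMax on an input of size n."""
    ops = 2              # currentMax <- A[0]: one index, one assignment
    ops += 1 + n         # loop counter initialization plus n condition tests
    ops += 2 * (n - 1)   # each comparison A[i] > currentMax: index + compare
    ops += 2 * (n - 1)   # worst case: every comparison triggers index + assign
    ops += 2 * (n - 1)   # increment of counter i: add + assign
    ops += 1             # return currentMax
    return ops

for n in (1, 10, 100, 1000):
    assert count_ops_worst_case(n) == 7 * n - 2
```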

## Big-Oh Notation

Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that
f(n) <= c•g(n) for all n >= n0.

Examples:

1. 7n - 2 is O(n):
   we need c > 0 and n0 >= 1 such that 7n - 2 <= c•n for n >= n0;
   this is true for c = 7 and n0 = 1.
2. 3n^3 + 20n^2 + 5 is O(n^3):
   we need c > 0 and n0 >= 1 such that 3n^3 + 20n^2 + 5 <= c•n^3 for n >= n0;
   this is true for c = 4 and n0 = 21.
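Both witness pairs (c, n0) can be verified numerically; a quick Python check (the upper range of 1000 is an arbitrary spot check, not a proof):

```python
# Example 1: 7n - 2 <= 7n for every n >= 1, so c = 7, n0 = 1 works.
assert all(7 * n - 2 <= 7 * n for n in range(1, 1000))

# Example 2: 3n^3 + 20n^2 + 5 <= 4n^3 from n = 21 onward, so c = 4, n0 = 21 works.
assert all(3 * n**3 + 20 * n**2 + 5 <= 4 * n**3 for n in range(21, 1000))

# n0 = 21 is tight: the inequality fails at n = 20.
assert not (3 * 20**3 + 20 * 20**2 + 5 <= 4 * 20**3)
```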

Now, instead of counting primitive operations precisely as in the arrayMax example, we can analyze each line using big-Oh notation:

```
Algorithm arrayMax(A, n)                       # operations
    currentMax <- A[0]                           O(1)
    for i <- 1 to n - 1 do                       O(n)
        if A[i] > currentMax then                O(n)
            currentMax <- A[i]                   O(n)
        { increment counter i (add & assign) }   O(n)
    return currentMax                            O(1)
                                       Total     O(n)
```

From this analysis, we conclude that algorithm arrayMax runs in O(n) time.