An algorithm is a step-by-step procedure for solving a problem in a finite amount of time. The procedure should be sequential and deterministic.
The following are the characteristics of an algorithm:
 Composed of a finite sequence of actions applied to an input set
 Has a unique initial action
 Each action has a unique successor action
 The sequence terminates with either a solution or a statement that the problem is insoluble
Algorithm Analysis
Through the analysis of an algorithm, we try to determine whether the given problem can be solved by the algorithm and, if it can, how efficiently it solves the problem in terms of time and space. This is crucial for differentiating algorithms: some look similar but are different, while others look different but are equivalent.
Computational Complexity Analysis
It mainly focuses on the theoretical study of the time and space requirements of algorithms.
Time complexity is the amount of work done by an algorithm, which is roughly proportional to the number of critical (inherently important) operations.
Space complexity is the amount of memory required by an algorithm to produce the result.
Pseudocode Example
It is a high-level description of an algorithm and is more structured than English prose, yet less detailed than a program. It is the preferred notation for describing algorithms because it hides program design issues.
Example: pseudocode for finding the maximum element of a given array
Algorithm arrayMax(A, n)
    Input: array A of n integers
    Output: maximum element of A
    currentMax ← A[0]
    for i ← 1 to n − 1 do
        if A[i] > currentMax then
            currentMax ← A[i]
    return currentMax
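For comparison, the pseudocode above can be translated directly into Python (a sketch; in practice Python's built-in max() does the same job):

```python
def array_max(A, n):
    """Return the maximum of the first n elements of A."""
    current_max = A[0]
    for i in range(1, n):          # i <- 1 to n - 1
        if A[i] > current_max:
            current_max = A[i]
    return current_max

print(array_max([3, 1, 4, 1, 5], 5))  # -> 5
```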
Counting Primitive Operations
In order to estimate the running time of an algorithm, we count its primitive operations (addition, subtraction, multiplication, division). This is done by inspecting the pseudocode and determining the maximum number of primitive operations executed by the algorithm as a function of the input size.
Here is an example of counting primitive operations:
Algorithm arrayMax(A, n)                        # operations
    currentMax ← A[0]                           2
    for i ← 1 to n − 1 do                       1 + n
        if A[i] > currentMax then               2(n − 1)
            currentMax ← A[i]                   2(n − 1)
        { increment counter i (add & assign) }  2(n − 1)
    return currentMax                           1
                                        Total   7n − 2
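The tally can also be verified by instrumenting the algorithm to count operations as it runs. The sketch below assumes the stated cost model (each indexing, comparison, addition, and assignment counts as one primitive operation) and uses a strictly increasing array as the worst case, where every comparison triggers an assignment:

```python
def array_max_op_count(A, n):
    """Return (maximum, primitive-operation count) for arrayMax,
    tallying operations under the cost model used in the table."""
    ops = 2                        # currentMax <- A[0]: index + assign
    current_max = A[0]
    ops += 1                       # initialize loop counter i
    for i in range(1, n):
        ops += 1                   # loop test: i <= n - 1 (true)
        ops += 2                   # index A[i] + compare
        if A[i] > current_max:
            ops += 2               # index A[i] + assign
            current_max = A[i]
        ops += 2                   # increment i: add + assign
    ops += 1                       # final loop test (false)
    ops += 1                       # return
    return current_max, ops

# Worst case: strictly increasing array, so every comparison assigns.
maximum, ops = array_max_op_count([1, 2, 3, 4, 5], 5)
print(maximum, ops)  # -> 5 33, and 7*5 - 2 = 33
```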
From the above, we see that algorithm arrayMax executes 7n − 2 primitive operations in the worst case.
Hence, the running time T(n) is bounded by two linear functions and can be expressed as:
a(7n − 2) <= T(n) <= b(7n − 2)
where a = time taken by the fastest primitive operation
and b = time taken by the slowest primitive operation
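As an illustration, these bounds can be evaluated for concrete values of a and b; the per-operation times below are hypothetical, chosen purely for the sketch:

```python
def running_time_bounds(n, a=1e-9, b=5e-9):
    """Return (lower, upper) bounds on T(n) in seconds, given that
    arrayMax executes 7n - 2 primitive operations in the worst case.
    a and b are hypothetical fastest/slowest per-operation times."""
    ops = 7 * n - 2
    return a * ops, b * ops

lower, upper = running_time_bounds(1_000_000)
# For n = 1,000,000: a(7n - 2) <= T(n) <= b(7n - 2)
```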
Big-Oh Notation
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n_{0} such that
f(n) <= cg(n) for n >= n_{0}
Example:
 7n − 2 is O(n)
 need c > 0 and n_{0} >= 1 such that 7n − 2 <= c•n for n >= n_{0}
 this is true for c = 7 and n_{0} = 1
 3n^{3} + 20n^{2} + 5 is O(n^{3})
 need c > 0 and n_{0} >= 1 such that 3n^{3} + 20n^{2} + 5 <= c•n^{3} for n >= n_{0}
 this is true for c = 4 and n_{0} = 21
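Both claims can be spot-checked numerically over a finite range (the algebraic argument, of course, covers all n >= n_{0}):

```python
# Check: 7n - 2 <= 7n for all n >= 1
assert all(7 * n - 2 <= 7 * n for n in range(1, 1000))

# Check: 3n^3 + 20n^2 + 5 <= 4n^3 for all n >= 21
assert all(3 * n**3 + 20 * n**2 + 5 <= 4 * n**3 for n in range(21, 1000))

# At n = 20 the bound fails (32005 > 32000), so n0 = 21 is tight for c = 4.
assert not (3 * 20**3 + 20 * 20**2 + 5 <= 4 * 20**3)
```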
Now, instead of counting primitive operations precisely as in the arrayMax example, we can express the cost of each step using big-Oh notation.
Algorithm arrayMax(A, n)                        # operations
    currentMax ← A[0]                           O(1)
    for i ← 1 to n − 1 do                       O(n)
        if A[i] > currentMax then               O(n)
            currentMax ← A[i]                   O(n)
        { increment counter i (add & assign) }  O(n)
    return currentMax                           O(1)
                                        Total   O(n)
From the above analysis, we conclude that the algorithm runs in O(n) time.