Asymptotic Notations  

(1) Big-oh (Big-O)

After analysing an algorithm, if one says
T(n) = O(n²)
it means that the algorithm will complete within c·n² time, for some constant c, for all sufficiently large n.
Hence Big-O gives an upper bound, but this upper bound may or may not be the tightest.
O(g(n)) = { f(n) : there exist positive constants c and n₀ such that
            0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }                          (upper bound)
↪ If f(n) = O(g(n)), we say that g(n) is an upper bound of f(n).
Hence, if T(n) = O(n²) then also
T(n) = O(n³)
T(n) = O(n⁴)
in fact T(n) = O(n^(2+r)) for any r ≥ 0
[but it does NOT mean O(n²) = O(n³) = O(n⁴) …]
The first statement is the most informative one, because it is the tightest of them.
Keep in mind: n = O(n³) is a one-sided equality; the '=' here means 'is a member of the set', so writing O(n³) = n would be wrong.

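A quick way to internalise the definition is to pick concrete witnesses c and n₀ and check the inequality numerically. The sketch below is a minimal Python check; the function f(n) = 3n² + 10n and the constants are chosen purely for illustration (they are not from the text above), and the check only spot-tests finitely many n rather than proving the bound.

# Minimal sketch: numerically spot-checking the Big-O definition.
# f(n) = 3n^2 + 10n is an illustrative function, not from the original notes.

def f(n):
    return 3 * n * n + 10 * n

c, n0 = 4, 10          # claimed witnesses for f(n) = O(n^2)

# Check 0 <= f(n) <= c * n^2 for a range of n >= n0.
assert all(0 <= f(n) <= c * n * n for n in range(n0, 10_000))

# The bound need not be tight: the same f is also O(n^3) (c = 1, n0 = 5 works).
assert all(f(n) <= n ** 3 for n in range(5, 10_000))

print("f(n) = O(n^2) and f(n) = O(n^3) both hold on the tested range")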







(2) Small-oh (small-o)

In small-o notation we get the following information:
1. Big-O notation is like f(n) ≤ g(n) (asymptotically), when f(n) = O(g(n)).
2. Small-o notation is like f(n) < g(n) (asymptotically), when f(n) = o(g(n)).
3. The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. Small-o is used to denote an upper bound that is NOT asymptotically tight.
4. o(g(n)) = { f(n) : for any positive constant c > 0, there exists n₀ > 0 such that
   ↪ 0 ≤ f(n) < c·g(n) ∀ n > n₀ }
Example: 2n = o(n²),
         but 2n² ≠ o(n²).
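The key difference from Big-O is that the inequality must hold for EVERY c > 0, not just one. The Python sketch below spot-checks this for the example above; the helper holds_for and the tested range are my own illustrative choices (a finite check, not a proof): 2n < c·n² holds once n > 2/c for any c, while 2n² < c·n² fails for every n whenever c ≤ 2.

# Minimal sketch: the small-o definition must hold for EVERY constant c > 0.

def holds_for(c, f, g, n0, n_max=100_000):
    """Spot-check 0 <= f(n) < c*g(n) for all n in (n0, n_max)."""
    return all(0 <= f(n) < c * g(n) for n in range(n0 + 1, n_max))

f1 = lambda n: 2 * n          # 2n   = o(n^2): works for every c
f2 = lambda n: 2 * n * n      # 2n^2 is NOT o(n^2): already fails for c = 1
g  = lambda n: n * n

for c in (1.0, 0.1, 0.001):
    n0 = int(2 / c)           # for 2n < c*n^2 we need n > 2/c
    print(f"c={c}: 2n < c*n^2 beyond n0={n0}?", holds_for(c, f1, g, n0))

print("c=1: 2n^2 < 1*n^2 ever?", holds_for(1.0, f2, g, 1))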

(3) Big-omega (Ω)

In Big-omega notation we get the following information:
1. Ω-notation is used to provide an asymptotic lower bound.
2. We say f(n) is big-omega of g(n), written f(n) = Ω(g(n)), iff there exist positive constants c > 0 and n₀ > 0
such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀.

3. If f(n) = Ω(g(n)), we say that g(n) is a lower bound of f(n).
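As a small illustration (a Python sketch; the function f(n) = n²/2 − 3n and the witnesses are chosen only for this example and the check covers finitely many n): f(n) = Ω(n²) with c = 1/4 and n₀ = 12, because n²/2 − 3n ≥ n²/4 exactly when n ≥ 12.

# Minimal sketch: spot-checking the Big-Omega definition with explicit witnesses.
# f(n) = n^2/2 - 3n is an illustrative function, not from the original notes.

def f(n):
    return n * n / 2 - 3 * n

c, n0 = 0.25, 12    # claimed witnesses for f(n) = Omega(n^2)

# Check 0 <= c*g(n) <= f(n) for a range of n >= n0.
assert all(0 <= c * n * n <= f(n) for n in range(n0, 10_000))
print("f(n) >= (1/4)*n^2 for all tested n >= 12, so f(n) = Omega(n^2)")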

Points to learn:
If P(n) denotes the running time of insertion sort on an input of size n, then we can say that P(n) = Ω(n), since insertion sort takes linear time in the best case (see the sketch below).
Remember that in practice we use O and Ω to give reasonably tight upper and lower bounds respectively; otherwise, for any function P(n), both of the following statements are trivially correct:
P(n) = Ω(0)
P(n) = O(∞)
which only say that any algorithm takes at least zero time and at most infinite time to execute.
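The sketch below is a hypothetical instrumented insertion sort in Python (not from the original notes) that counts element comparisons: on an already sorted input the count grows roughly like n (best case, hence P(n) = Ω(n)), while on a reverse-sorted input it grows roughly like n²/2 (worst case, hence P(n) = O(n²)).

# Minimal sketch: counting comparisons in insertion sort to see best/worst case growth.

def insertion_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]       # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 1000
print("sorted input  :", insertion_sort_comparisons(range(n)))          # ~ n      (best case)
print("reversed input:", insertion_sort_comparisons(range(n, 0, -1)))   # ~ n^2/2  (worst case)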

(4) Big-theta (Θ)

In Big-theta notation we get the following information:
1. Θ-notation is used to provide an asymptotically tight bound (an upper and a lower bound simultaneously).
2. We say f(n) is big-theta of g(n), written f(n) = Θ(g(n)), iff there exist positive constants c₁, c₂ and n₀
such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.

3. f(n) = Θ(g(n)) if and only if
   ↪ f(n) = O(g(n)) and
     f(n) = Ω(g(n)).
If f(n) = Θ(g(n)), we say that g(n) is a tight bound of f(n).
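Continuing the illustrative function f(n) = 3n² + 10n used in the Big-O sketch above: it is Θ(n²), since it is sandwiched as 3n² ≤ f(n) ≤ 4n² for all n ≥ 10 (so c₁ = 3, c₂ = 4, n₀ = 10 are valid witnesses). A minimal Python spot-check of this sandwich (again a finite check, not a proof):

# Minimal sketch: spot-checking the Big-Theta (sandwich) definition with explicit witnesses.
# f(n) = 3n^2 + 10n is the same illustrative function used earlier.

def f(n):
    return 3 * n * n + 10 * n

c1, c2, n0 = 3, 4, 10   # claimed witnesses for f(n) = Theta(n^2)

assert all(0 <= c1 * n * n <= f(n) <= c2 * n * n for n in range(n0, 10_000))
print("3*n^2 <= f(n) <= 4*n^2 for all tested n >= 10, so f(n) = Theta(n^2)")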






(5) Small-omega (ω)

1. ω-notation is used to denote a lower bound that is NOT asymptotically tight.
2. Thus ω(g(n)) = { f(n) : for any positive constant c > 0, there exists n₀ > 0
such that
0 ≤ c·g(n) < f(n) for all n ≥ n₀ }.
3. f(n) ∈ ω(g(n)) if and only if
   ↪ g(n) ∈ o(f(n)).
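As an illustration (a Python sketch; the functions and tested ranges are my own choices and the check is only a finite spot-test): n² = ω(n), because for any constant c the inequality c·n < n² holds once n > c; equivalently the ratio n/n² shrinks toward 0, which is the same as saying n ∈ o(n²).

# Minimal sketch: small-omega must hold for EVERY constant c > 0,
# and f(n) in omega(g(n)) is equivalent to g(n) in o(f(n)).

f = lambda n: n * n     # candidate f(n) = n^2
g = lambda n: n         # candidate g(n) = n

for c in (1, 100, 10_000):
    n0 = c + 1          # for c*n < n^2 we need n > c
    ok = all(0 <= c * g(n) < f(n) for n in range(n0, n0 + 1000))
    print(f"c={c}: c*g(n) < f(n) beyond n0={n0}?", ok)

# Duality: g(n)/f(n) -> 0, i.e. g(n) = o(f(n)).
print([g(n) / f(n) for n in (10, 1000, 100_000)])   # ratios shrink toward 0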

