
Saturday, December 29, 2012

5.3-6


5.3-6 Explain how to implement the algorithm PERMUTE-BY-SORTING to handle the case in which two or more priorities are identical. That is, your algorithm should produce a uniform random permutation, even if two or more priorities are identical.

PERMUTE-BY-SORTING(A)
1 n = A.length
2 let P[1..n] be a new array
3 for i = 1 to n
4     P[i] = RANDOM(1, $n^3$) + i
5 sort A, using P as sort keys
I made a simple change: I am adding i in step 4. That is, I changed the original algorithm from P[i] = RANDOM(1, $n^3$) to P[i] = RANDOM(1, $n^3$) + i. Say we draw all equal priorities; since each i is distinct, adding it would seem to make every priority distinct.

Ex:
n = 3

i = 1
Let RANDOM(1, 3^3) = 1
P[1] = RANDOM(1, $n^3$) + 1
P[1] = 2


i = 2
Let RANDOM(1, 3^3) = 1
P[2] = RANDOM(1, $n^3$) + 2
P[2] = 3


i = 3
Let RANDOM(1, 3^3) = 1
P[3] = RANDOM(1, $n^3$) + 3
P[3] = 4
It's a small change, and in this example it does create distinct priorities. But this doesn't hold up on closer inspection: we can still get equal priorities, for example when RANDOM returns 2 for i = 1 and 1 for i = 2, both priorities come out to 3. Rework is needed.
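To see the collision concretely, and one common way to repair the algorithm — redraw the whole priority array until all priorities are distinct — here is a Python sketch (this retry loop is my assumption of a fix, not necessarily the book's intended answer):

```python
import random

# the collision described above: with P[i] = RANDOM(1, n^3) + i,
# a draw of 2 at i = 1 and a draw of 1 at i = 2 both yield priority 3
assert 2 + 1 == 1 + 2

# sketch of a fix: resample all n priorities until they are pairwise
# distinct, then sort A using P as sort keys
def permute_by_sorting(A):
    n = len(A)
    while True:
        P = [random.randint(1, n ** 3) for _ in range(n)]
        if len(set(P)) == n:  # all priorities distinct
            break
    return [a for _, a in sorted(zip(P, A))]
```

By exercise 5.3-5 the priorities are all distinct with probability at least 1 - 1/n, so the expected number of redraws is close to 1.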

Thursday, December 27, 2012

5.3-5


Prove that in the array P in procedure PERMUTE-BY-SORTING, the probability that all elements are unique is at least 1 - 1/n.

The algorithm is as below
PERMUTE-BY-SORTING(A)
1 n = A.length
2 let P[1..n] be a new array
3 for i = 1 to n
4     P[i] = RANDOM(1, $n^3$)
5 sort A, using P as sort keys
For P[] to be unique, let's go through one element at a time. For the first element the probability of being unique is 1, because there is nothing in P yet to collide with. (Each particular value is picked with probability $1/n^3$, since every value in the set $\{1, \ldots, n^3\}$ is equally likely.)
For the second element we have to make sure we don't pick the first element's value again.
For example, consider tossing a coin: the probability of not getting a tail is 1 - 1/2 (the probability of getting a head).
Similarly, the probability of not repeating the first value when we pick the second is $1 - 1/n^3$. (We might be tempted to write $1/(n^3-1)$, but this is wrong, since we are not selecting from a smaller set — all $n^3$ values remain available.)
Similarly, the probability of avoiding the first and second values when we pick the third is $1 - (1/n^3 + 1/n^3) = 1 - 2/n^3$.
Let $E_i$ be the event that we pick a unique priority for element i, and let E be the event that all elements of array P are unique. Then $$ P\{E\}=P\{E_1 \cap E_2 \cap \cdots \cap E_n\} = P\{E_1\}\, P\{E_2 \mid E_1\}\, P\{E_3 \mid E_1 \cap E_2\} \cdots P\{E_n \mid E_1 \cap E_2 \cap \cdots \cap E_{n-1}\}$$ $$ P\{E\} = 1 \cdot \left(1 - \frac{1}{n^3}\right) \cdot \left(1 - \frac{2}{n^3}\right) \cdots \left(1 - \frac{n-1}{n^3}\right) $$
The last factor is $1 - (n-1)/n^3$ because the final pick must avoid all n - 1 previously chosen priorities.
$$ P\{E\} = 1 \cdot \frac{n^3 - 1}{n^3} \cdot \frac{n^3 - 2}{n^3} \cdots \frac{n^3 - n + 1}{n^3} > \left(\frac{n^3 - n}{n^3}\right)^{n-1} = \left(1 - \frac{1}{n^2}\right)^{n-1} \geq \left(1 - \frac{1}{n^2}\right)^{n} $$ since every factor is at least $(n^3 - n)/n^3$ and $0 < 1 - 1/n^2 < 1$. The last expression has the form $(1+x)^n$ with $x = -1/n^2$; by the binomial theorem $$(1+x)^n ={n \choose 0} x^0 + {n \choose 1} x^1+{n \choose 2} x^2+\cdots+{n \choose n} x^n$$ and since $x \geq -1$, truncating after the linear term only decreases the value (Bernoulli's inequality), so $$ \left(1 - \frac{1}{n^2}\right)^n \geq 1 - \frac{n}{n^2} = 1 - \frac{1}{n} $$
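The bound can be sanity-checked numerically; a small sketch computing the exact probability that all n priorities are distinct (the helper name is mine):

```python
def prob_all_distinct(n):
    # exact probability that n independent uniform draws from {1, ..., n^3}
    # are pairwise distinct: product of (n^3 - i)/n^3 for i = 0 .. n-1
    m = n ** 3
    p = 1.0
    for i in range(n):
        p *= (m - i) / m
    return p

# the claimed lower bound 1 - 1/n holds for these n
for n in (2, 5, 10, 50, 200):
    assert prob_all_distinct(n) >= 1 - 1 / n
```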

5.3-3

Suppose that instead of swapping element A[i] with a random element from the subarray A[i..n], we swapped it with a random element from anywhere in the array:


PERMUTE-WITH-ALL(A)
1 n = A.length
2 for i = 1 to n
3     swap A[i] with A[RANDOM(1, n)]
Does this code produce a uniform random permutation? Why or why not?


The above code will not produce a uniform random permutation. The loop increments i, but each call to RANDOM takes 1 and n as arguments, so every position from 1 to n is equally likely at every step. This is just like tossing a coin twice: each of the $2^2$ outcomes (HT, TH, HH, TT) has probability $1/4 = 1/2^2$. Here, each of the $n^n$ possible sequences of random choices has probability $1/n^n$.
Ex: consider n = 3 with A[1] = 1, A[2] = 2, A[3] = 3. Then there are $3^3 = 27$ possible executions, since RANDOM(1, 3) is called three times from PERMUTE-WITH-ALL(A), but only 3! = 6 permutations:
permutations:
1 2 3
1 3 2
2 1 3
2 3 1
3 1 2
3 2 1

Similarly, the probability of any particular sequence when picking a random number from n numbers n times is $1/n^n$. But for a uniform distribution, according to Lemma 5.4, each permutation should have probability $1/n!$. If PERMUTE-WITH-ALL did produce a uniform random permutation, each permutation would occur 1/6 of the time. That would mean each permutation arises from an integer number m of the 27 equally likely executions, with m/27 = 1/6. No integer m satisfies this condition, so the distribution cannot be uniform.
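The counting argument can be checked by brute force; a sketch that enumerates all $3^3 = 27$ equally likely executions of PERMUTE-WITH-ALL for n = 3 (the helper name is mine):

```python
import itertools
from collections import Counter

# enumerate all n^n equally likely sequences of swaps made by
# PERMUTE-WITH-ALL and count which permutation each one produces
def outcome_counts(n):
    counts = Counter()
    for choices in itertools.product(range(n), repeat=n):
        A = list(range(1, n + 1))
        for i, j in enumerate(choices):
            A[i], A[j] = A[j], A[i]
        counts[tuple(A)] += 1
    return counts

counts = outcome_counts(3)
assert sum(counts.values()) == 27      # 3^3 equally likely executions
assert len(counts) == 6                # all 3! permutations occur
assert len(set(counts.values())) > 1   # but not equally often: non-uniform
```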

Wednesday, December 26, 2012

5.2–3

5.2–3 Use indicator random variables to compute the expected value of the sum of n dice. Let $X_1, X_2, X_3, X_4, X_5, X_6$ be random variables where $X_j$ counts the number of times face j comes up in n throws (each $X_j$ is a sum of n indicator variables, one per throw, each equal to 1 with probability 1/6). Let X denote the sum of the n dice. The expected value is the mean of the random variable, so $$E[X] = 1\,E[X_1]+ 2\,E[X_2]+3\,E[X_3]+4\,E[X_4]+5\,E[X_5]+6\,E[X_6]$$ The above equation can be written as $$E[X] = \sum_{i = 1}^6 i\, E[X_i] $$ The expected number of times face 1 comes up in n throws is $$E[X_1] = \sum_{i = 1}^n \frac{1}{6} = \frac{n}{6} $$ since the sum of n copies of 1/6 is n/6, and likewise for the other faces. Hence $$E[X] = \sum_{i = 1}^6 i\, \frac{n}{6} = \frac{n}{6} \sum_{i = 1}^6 i = \frac{n}{6} (1+2+3+4+5+6) = \frac{21n}{6} = \frac{7n}{2} $$
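A quick simulation agrees with $E[X] = 7n/2$; a sketch (the helper name, trial count, and seed are mine):

```python
import random

# simulate the sum of n dice and compare the sample mean with 7n/2
def avg_dice_sum(n, trials=20000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(rng.randint(1, 6) for _ in range(n))
    return total / trials

n = 4
assert abs(avg_dice_sum(n) - 7 * n / 2) < 0.3
```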

Thursday, December 20, 2012

4.3-6


4.3-6 Show that the solution to $T(n) = 2T(\lfloor n/2 \rfloor + 17)+ n$ is $O(n \lg n)$. Let $T(n) \leq c(n-34)\lg(n-34)$. The argument of the recursive call satisfies $\lfloor n/2 \rfloor + 17 - 34 \leq n/2 - 17 = (n-34)/2$, so $$ T(n) \leq 2c\,\frac{n-34}{2}\lg\frac{n-34}{2} + n = c(n-34)\lg(n-34) - c(n-34) + n $$ For $n \geq 68$ we have $n - 34 \geq n/2$, so the residual $c(n-34) - n$ is nonnegative whenever $c \geq 2$. Hence $$ T(n) \leq c(n-34)\lg(n-34) \leq c\,n\lg n $$ which gives $T(n) = O(n \lg n)$.

4.3-5


Show that $\theta(n lg n)$ is the solution to the “exact” recurrence (4.3) for merge sort.
equation 4.3 $$ T(n) = T(\lfloor n/2 \rfloor)+ T(\lceil n/2 \rceil)+ \theta(n) $$ For the upper bound let $T(n) \leq c\, n \lg n$, treating both halves as n/2 (the floors and ceilings can be absorbed into the constant): $$ T(n) \leq c\, \frac{n}{2} \lg \frac{n}{2} + c\, \frac{n}{2} \lg\frac{n}{2} + n = c\, n \lg \frac{n}{2} + n = c\, n \lg n - c\, n + n = c\, n \lg n - ( c\, n - n )$$ The residual should be nonnegative: $$c\,n - n \geq 0 $$ $$c \geq 1 $$ For the base case we can check T(2) and T(3), or else add a $c_2 n$ term to our assumption and prove T(1). A symmetric argument with $T(n) \geq c\, n \lg n$ and $c \leq 1$ gives the matching lower bound, so $T(n) = \theta(n \lg n)$.
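The exact recurrence can be checked numerically; a sketch (taking T(1) = 1 and the $\theta(n)$ term to be exactly n, both assumptions for illustration), sandwiching T(n) between $n \lg n$ and $2 n \lg n$:

```python
import math
from functools import lru_cache

# the "exact" merge-sort recurrence, with floors and ceilings kept
@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1
    return T(n // 2) + T((n + 1) // 2) + n

# consistent with theta(n lg n): bounded by 1*n*lg n and 2*n*lg n
for n in range(2, 3000):
    assert n * math.log2(n) <= T(n) <= 2 * n * math.log2(n)
```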

4.3-4

Show that by making a different inductive hypothesis, we can overcome the difficulty with the boundary condition T(1) for recurrence (4.19) without adjusting the boundary conditions for the inductive proof. $$ T(n) = 2 T( \lfloor n/2 \rfloor) + n $$ Let $ T(n)\leq c_1 n \lg n +c_2 n $. Then $$ T(n) \leq 2 \left( c_1\, \frac{n}{2} \lg\frac{n}{2} + c_2\, \frac{n}{2} \right) + n = c_1 n \lg\frac{n}{2} + c_2 n + n = c_1 n \lg n - c_1 n + c_2 n + n $$ $$ T(n) \leq c_1 n \lg n + c_2 n - ( c_1 n - n ) $$ so the hypothesis is maintained provided $$ c_1 n - n \geq 0 \implies c_1 \geq 1 $$ For the base case, $ T(1)\leq c_1 \cdot 1 \cdot \lg 1 +c_2 \cdot 1 = c_2 $, so if $c_2$ is sufficiently large the base case is proved as well, with no change to the boundary condition.

4.3-3

We saw that the solution of $ T(n) = 2T(\lfloor n/2\rfloor)+ n$ is O(n lg n). Show that the solution of this recurrence is also $ \Omega (n \lg n)$. Conclude that the solution is $\theta(n \lg n)$. The O bound is already proved, so I will prove $\Omega$. Let $T(n) \geq c\, n \lg n$: $$ T(n) \geq 2 c\, \frac{n}{2} \lg\frac{n}{2}+n = c\,n \lg\frac{n}{2}+n = c\,n \lg n - c\,n +n = c\,n \lg n + ( n - c\,n ) $$ For the above to be at least $c\, n \lg n$, the residual must be nonnegative: $$ n - c\,n \geq 0 $$ $$ c \leq 1 $$ Base case: $$ T(2) = \theta(1) \geq c \cdot 2 \lg 2 = 2c $$ which holds for sufficiently small c. Together with the O bound, $T(n) = \theta(n \lg n)$.

4.3-2

4.3-2 Show that the solution of $T(n) = T(\lceil n/2\rceil)+ 1 $ is $O(\lg n)$. Let $T(n) \leq c \lg n$ (ignoring the ceiling, which can be handled by slightly strengthening the hypothesis): $$T(n) \leq c \lg \frac{n}{2}+1 = c \lg n - c \lg 2 +1 = c \lg n - ( c -1) $$ The residual must be nonnegative: $$c - 1 \geq 0 $$ $$c \geq 1 $$ Base case: $T(2) = \theta(1) \leq c \lg 2 = c$ for sufficiently large c. The induction starts at n = 2, since the bound fails at n = 1 ($\lg 1 = 0$).

4.3-1


Show that the solution of T(n) = T(n - 1) + n is $O(n^2)$. If you try to prove $T(n) \leq cn^2$ directly you will end up with $$cn^2 - (2nc - c - n)$$ and the stray c inside the residual makes proving $2nc - c - n \geq 0 $ awkward. As you can see, if you can eliminate that c the proof becomes easier, so let us assume instead $$T(n) \leq cn^2 - c$$ Then $$T(n) \leq c(n-1)^2 - c + n = cn^2 + c - 2nc +n -c $$ The c terms cancel out, hence $$T(n) \leq cn^2 - (2nc-n)$$ and the residual $2nc - n = n(2c - 1)$ is at least c for every $n \geq 1$ once $$c \geq 1 $$ so the hypothesis is maintained. For the base case take n = 2: $ T(2) = \theta(1) \leq 4c - c = 3c $, which holds for sufficiently large c. (The hypothesis fails at n = 1, where $cn^2 - c = 0$, which is why the induction starts at n = 2.)
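Since T(n) = T(n-1) + n has the closed form n(n+1)/2 (taking T(1) = 1), the $O(n^2)$ bound is easy to check numerically; a sketch:

```python
# T(n) = T(n-1) + n with T(1) = 1, computed iteratively
def T(n):
    t = 1
    for i in range(2, n + 1):
        t += i
    return t

# with c = 1 the bound T(n) <= c n^2 already holds for every n >= 1,
# consistent with the closed form T(n) = n(n+1)/2
assert all(T(n) <= n * n for n in range(1, 500))
```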

3-5


3-5 Variations on O and $\Omega$
Some authors define $\Omega$ in a slightly different way than we do; let’s use $ \overset{\infty}{\Omega}$ (read “omega infinity”) for this alternative definition. We say that $f(n) = \overset{\infty}{\Omega}(g(n))$ if there exists a positive constant c such that $f(n) \geq c\,g(n) \geq 0$ for infinitely many integers n.

a. Show that for any two functions f(n) and g(n) that are asymptotically nonnegative, either f(n) = O(g(n)) or $f(n)= \overset{\infty}{\Omega}(g(n))$ or both, whereas this is not true if we use $\Omega$ in place of $\overset{\infty}{\Omega}$

b. Describe the potential advantages and disadvantages of using $\overset{\infty}{\Omega}$ instead of $\Omega$ to characterize the running times of programs.
Some authors also define O in a slightly different manner; let’s use $O^{'}$ for the alternative definition. We say that $f(n) = O^{'}(g(n))$ if and only if $|f(n)| = O(g(n))$.

c. What happens to each direction of the “if and only if” in Theorem 3.1 if we substitute $O^{'}$ for O but still use $\Omega$ ?

Some authors define $\tilde{O}$ (read “soft-oh”) to mean O with logarithmic factors ignored: $\tilde{O}(g(n)) = \{ f(n): $ there exist positive constants c, k, and $n_0$ such that

                   $0 \leq f(n) \leq c\, g(n) \lg^{k}(n)$ for all $n \geq n_0$ }.

d. Define $\tilde{\Omega}$ and $\tilde{\theta}$ in a similar manner. Prove the corresponding analog to Theorem 3.1.


Solution:
a. By definition f(n) = O(g(n)) means $0 \leq f(n) \leq c\, g(n)$ for some positive constant c and all $n \geq n_0$. If f(n) is not O(g(n)), then for every constant c we have $f(n) > c\,g(n)$ for infinitely many n; since f and g are asymptotically nonnegative, this gives $f(n) \geq c\,g(n) \geq 0$ for infinitely many n, i.e. $f(n)= \overset{\infty}{\Omega}(g(n))$. The ordinary $\Omega$ does not work here because it requires the inequality for all $n \geq n_0$, not just for infinitely many n: a function that oscillates (large on even n, small on odd n, say) can fail to be O(g(n)) and fail to be $\Omega(g(n))$ while still being $\overset{\infty}{\Omega}(g(n))$.

b. The advantage of $\overset{\infty}{\Omega}$ is that it can certify a lower bound that holds for infinitely many input sizes (for example, along a subsequence of inputs on which the algorithm is slow) even when no bound holds for all large n. The disadvantage is that it guarantees nothing for all sufficiently large n, so it does not lower-bound the running time on every large input.

c. Neither direction of the “if and only if” is affected. Since we still use $\Omega$, we have $f(n) \geq c\,g(n) \geq 0$ for all sufficiently large n (and n, the number of input elements, is never negative), so f(n) is eventually nonnegative and $|f(n)| = f(n)$. Hence $f(n) = O^{'}(g(n))$, which means $0 \leq |f(n)| \leq c\,g(n)$, is equivalent to $f(n) = O(g(n))$, and Theorem 3.1 holds unchanged.

d. $\tilde{O}(g(n)) = \{ f(n): $ there exist positive constants c, k, and $n_0$ such that

                   $0 \leq f(n) \leq c\, g(n) \lg^{k}(n)$ for all $n \geq n_0$ }.

$\tilde{\Omega}(g(n)) = \{ f(n): $ there exist positive constants c, k, and $n_0$ such that

                   $0 \leq c\, g(n) \lg^{k}(n)\leq f(n)$ for all $n \geq n_0$ }.

$\tilde{\theta}(g(n)) = \{ f(n): $ there exist positive constants $c_1$, $c_2$, k, and $n_0$ such that

                   $0 \leq c_1\, g(n) \lg^{k}(n)\leq f(n) \leq c_2\, g(n) \lg^{k}(n)$ for all $n \geq n_0$ }.

Since $f(n) = \tilde{\theta}(g(n))$ means $0 \leq c_1 g(n) \lg^{k}(n)\leq f(n) \leq c_2 g(n) \lg^{k}(n)$, separating the inequalities gives $$\begin{equation}0 \leq c_1 g(n) \lg^{k}(n)\leq f(n) \end{equation}$$
$$\begin{equation} 0 \leq f(n) \leq c_2 g(n) \lg^{k}(n) \end{equation}$$ Equation $(1)$ says $f(n) = \tilde{\Omega}(g(n))$ and equation $(2)$ says $f(n) = \tilde{O}(g(n))$.
Hence $f(n) = \tilde{\theta}(g(n))$ if and only if $f(n) = \tilde{\Omega}(g(n))$ and $f(n) = \tilde{O}(g(n))$, which is the analog of Theorem 3.1.

4.1-1


What does FIND-MAXIMUM-SUBARRAY return when all elements of A are negative? It returns a maximum subarray of length 1: the largest (least negative) element together with its indices (low = high), since extending the subarray by any further negative element only decreases the sum.
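A brute-force maximum-subarray sketch (not the book's divide-and-conquer procedure; the helper name is mine) illustrates the all-negative case:

```python
# brute force: try every (i, j) window and keep the best sum; with all
# elements negative, the winner is always a single-element subarray
def max_subarray(A):
    best = (0, 0, A[0])  # (low, high, sum) with 0-based indices
    for i in range(len(A)):
        s = 0
        for j in range(i, len(A)):
            s += A[j]
            if s > best[2]:
                best = (i, j, s)
    return best

assert max_subarray([-5, -1, -8, -9]) == (1, 1, -1)
```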

Sunday, December 9, 2012

3-3


Ordering by asymptotic growth rates
a.Rank the following functions by order of growth; that is, find an arrangement g1, g2, .....g30 of the functions satisfying $g1 = \Omega(g2)$, $g2 = \Omega(g3)$ . . . ,$g29 = \Omega(g30)$. Partition your list into equivalence classes such that functions f(n) and g(n) are in the same class if and only if $f(n) = \theta(g(n))$.

$\lg\lg(n)$ $2^{\lg^*n}$ ${(\sqrt2)}^{\lg(n)}$ $n^2$ $n!$ $(\lg(n))!$
$(\frac{3}{2})^n$ $n^3$ $\lg^2(n)$ $\lg(n!)$ $2^{2^{n}}$ $n^{ \frac{1}{\lg(n)}}$
$\ln{\ln(n)}$ $\lg^*n$ $n . 2^n$ $n^{\lg{\lg(n)}}$ $ln(n)$ 1
$2^{\lg(n)}$ $(\lg n)^{\lg n}$ $e^n$ $4^{lg n}$ $(n+1)!$ $\sqrt {lg n}$
$\lg^*(\lg n)$ $2 ^ \sqrt{2 lg n}$ $n$ $2^n$ $n \lg n$ $2 ^{2^{n+1}}$

b. Give an example of a single nonnegative function f(n) such that for all functions $g_i(n)$ in part (a), f(n) is neither $O(g_i(n))$ nor $\Omega(g_i(n))$.
Solution:
a. Functions listed on the same line below are related by $\theta$; otherwise each function is $\Omega$ (in fact $\omega$) of every function listed after it. The ranking is easier to work out from the bottom up. $$2^{2^{n+1}}$$ $$2^{2^{n}}$$ $$(n + 1)!$$ $$n!$$ see equation 14 $$e^n$$ see equation 12 $$ n \cdot 2^n $$ $$2^n$$ $$(3/2)^n$$ $$(\lg n)^{\lg n} = n ^ {\lg \lg n}$$ see equation 11 $$(\lg n)! $$ see equation 10 $$n^3$$ $$n^2 = 4^{\lg n}$$ see equation 9 $$n \lg n \text{ and } \lg(n!)$$ see equation 8 $$n = 2^{\lg n}$$ see equation 7 $$(\sqrt 2) ^ {\lg n} = \sqrt n $$ see equations 6 and 15 $$2 ^ {\sqrt{2 \lg n}} $$ see equation 4 $$\lg^2 n$$ $$\ln n$$ $$\sqrt {\lg n}$$ $$\ln \ln n $$ see equation 3 $$ 2 ^ {\lg^* n} $$ $$ \lg^* n \text{ and } \lg^*(\lg n)$$ see equation 2 $$ \lg(\lg^* n)$$ $$n ^ {1/ \lg n} (= 2) \text{ and } 1 $$ see equation 1
The rules below are useful for the ranking:
 • Exponential functions grow faster than polynomial functions, which grow faster than polylogarithmic functions.
• The base of a logarithm doesn’t matter asymptotically, but the base of an exponential and the degree of a polynomial do matter.
b. The following f(n) is nonnegative, and for all functions $g_i (n)$ in part (a), f(n) is neither $O(g_i (n))$ nor $ \Omega(g_i (n))$:
f(n) = $2^{2^{n+2}}$ if n is even,
0 if n is odd. This function is greater than $2^{2^{n+1}}$ (the fastest-growing function in the list) whenever n is even, and 0 whenever n is odd, so no $g_i$ bounds it from above or from below.
1 is a constant. $$2^{\lg n} = n $$ since the logarithm base is 2. Raising both sides to the power $\frac{1}{\lg n}$: $$ \begin{equation} 2 = n^{\frac{1}{\lg n}} \end{equation}$$ Since this equals 2, which is also a constant, $$ n^{\frac{1}{\lg n}} = 2 = \theta(1) $$

The iterated logarithm is an extremely slowly growing function: by its definition its value is only about 5 even when n is the number of atoms in the observable universe, and applying a log to it gives something smaller still: $$\lg (\lg^{*}n)$$ From the definition of the iterated logarithm, $$\lg^{*}n = \lg^{*}{\lg n}+1$$ since $\lg^{*}n$ is the number of times the logarithm must be applied until the result is $\leq 1$, and applying it once to n leaves $\lg n$. Ex: $$\lg^{*}2 = 1 + \lg^{*}{\lg 2} = 1 + \lg^{*}{1} = 1 $$ because $\lg^{*}{1} = 0$ (the result is already $\leq 1$, so no applications are needed). Hence $$\lg^{*}{\lg n} = \lg^{*}n-1$$ and $$\begin{equation}\lg^{*}{\lg n} = \theta(\lg^{*}n)\end{equation}$$ $$\begin{equation}\ln \ln n = \omega(2 ^ {\lg ^{*} n})\end{equation}$$ $$\begin{equation}2^{\sqrt{2 \lg n}} = \omega(\lg^2 n) \end{equation}$$

Proof of equation 4: we have to show that the limit below is infinity (by the definition of $\omega$, the numerator must dominate for large n): $$ \lim_{n\to\infty} \frac{2^{\sqrt{2 \lg n}}} {\lg^2 n}$$ Since the limit is in indeterminate form, we can apply L'Hôpital's rule (differentiating numerator and denominator) to convert it to determinate form. The derivative of $2^{\sqrt{2 \lg n}}$ is $\frac{2^{\sqrt{2 \lg n} } \ln 2}{n \sqrt{2 \lg n}}$ (tip: differentiate $2^x$ after applying lg to both sides). Applying the rule three times we finally get $$ \lim_{n\to\infty} \frac{ 2^{\sqrt{2 \lg n} } (\ln 2)^3} {3}$$ which is infinity as n tends to infinity.
Applying lg to both sides of equation 4's comparison gives $\sqrt{2 \lg n}$ versus $2 \lg \lg n$, so $$\begin{equation} \sqrt{2 \lg n} = \omega(2 \lg \lg n) \end{equation} $$ $$\begin{equation} (\sqrt{2}) ^ {\lg n} = \sqrt{n} \end{equation} $$ (see log basics) $$\begin{equation} 2 ^ {\lg n} = n \end{equation} $$ $$\begin{equation} \lg (n!) = \theta(n \lg n) \end{equation} $$ through Stirling's approximation, equation 3.16 in the book. $$\begin{equation} n^2 = 4 ^{\lg n} \end{equation} $$ $$\begin{equation} (\lg n)! = \omega( n ^ 3) \end{equation} $$

Proof of equation 10: applying lg to numerator and denominator of $(\lg n)!/n^3$, $$ \lim_{n\to\infty} \frac{ \lg((\lg n)!) } { 3 \lg n } $$ Applying Stirling's approximation to the numerator, $$ \lim_{n\to\infty} \frac{ \lg n \cdot \lg \lg n } { 3 \lg n } = \lim_{n\to\infty} \frac{ \lg \lg n } { 3 } $$ which is infinity as n tends to infinity. $$\begin{equation} (\lg n)^{\lg n} = n ^ {\lg \lg n} \end{equation} $$ $$\begin{equation} e^{n} = \omega(n \cdot 2^n) \end{equation} $$

Proof of equation 12: $$ \lim_{n\to\infty} \frac {2^n (\frac{e}{2})^{n} } {n \cdot 2^n} = \lim_{n\to\infty} \frac {(\frac{e}{2})^{n} } {n } $$ Applying L'Hôpital's rule (the derivative of $(\frac{e}{2})^{n}$ with respect to n is $(\frac{e}{2})^{n} \ln \frac{e}{2}$): $$\lim_{n\to\infty} \left(\frac{e}{2}\right)^{n} \ln \frac{e}{2} $$ which is infinity as n tends to infinity, since $e/2 > 1$. $$ \begin{equation} n! = \theta(n^{n+\frac{1}{2}} e^{-n}) \end{equation} $$ This is Stirling's approximation with the constant factor and lower-order terms dropped. $$\begin{equation} n! = \omega(e^n) \end{equation} $$

Proof of equation 14: $$\lim_{n\to\infty} \frac {n!} {e^n} $$ Substituting equation 13 for $n!$, $$\lim_{n\to\infty} \frac {n^{n+\frac{1}{2}} e^{-n}} {e^n} = \lim_{n\to\infty} \frac {n^{n+\frac{1}{2}}} {e^{2n}} $$ Applying the natural logarithm to numerator and denominator, $$\lim_{n\to\infty} \frac { ({n+\frac{1}{2}}) \ln {n}} {2n} $$ Applying L'Hôpital's rule, $$\lim_{n\to\infty} \frac { \ln {n} + \frac {n+\frac{1}{2}}{n} } {2} = \lim_{n\to\infty} \frac { \ln {n} + 1 + \frac{1}{2n}} {2} $$ which is infinity as n tends to infinity. $$\begin{equation} (\sqrt 2) ^{ \lg n} = \omega( 2 ^{\sqrt{2 \lg n}}) \end{equation} $$ Equation 15 follows by taking lg of both sides: $\frac{\lg n}{2} = \omega(\sqrt{2 \lg n})$.
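Several of the equalities used in the ranking can be spot-checked numerically; a small sketch (n = $2^{20}$ is my arbitrary choice for illustration):

```python
import math

# numeric spot-checks of identities used in the ranking above
n = 2 ** 20
lg_n = 20.0

# (sqrt 2)^{lg n} = sqrt(n)
assert math.isclose(math.sqrt(2) ** lg_n, math.sqrt(n))
# 2^{lg n} = n and 4^{lg n} = n^2
assert math.isclose(2 ** lg_n, n)
assert math.isclose(4 ** lg_n, n ** 2)
# (lg n)^{lg n} = n^{lg lg n}
assert math.isclose(lg_n ** lg_n, n ** math.log2(lg_n))
```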

Saturday, December 1, 2012

3-2


Relative asymptotic growths Indicate, for each pair of expressions (A,B) in the table below, whether A is $O$, $o$, $\Omega$,$\omega$ or $\theta$ of B. Assume that $k \geq 1$,$\epsilon$ > 0, and c > 1 are constants. Your answer should be in the form of the table with “yes” or “no” written in each box.
A B $O$ $o$ $\Omega$ $\omega$ $\theta$
$lg^k{n}$ $n^\epsilon$ Yes Yes No No No
$n^k$ $c^n$ Yes Yes No No No
$\sqrt{n}$ $n^{\sin {n}}$ No No No No No
$2^n$ $2^{n/2}$ No No Yes Yes No
$n^{lgc}$ $c^{lgn}$ Yes No Yes No Yes
$lg{(n!)}$ $lg{(n^n)}$ Yes No Yes No Yes
a. If $f(n) = o(g(n))$ then by definition $\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0$. Since A is to be expressed in terms of B (the question asks whether A is $O$, $o$, etc. of B), A plays the role of f(n) and B of g(n): $$\lim_{n \to \infty} \frac{\lg^k{n}}{n^\epsilon}$$ The above limit is in indeterminate form, so let's apply L'Hôpital's rule, which converts it to determinate form, differentiating numerator and denominator: $$\lim_{n \to \infty} \frac{k\, \lg^{k-1}{n}}{ \epsilon \ln 2 \; n^{\epsilon}}$$ If we apply L'Hôpital's rule k times in all, $$\lim_{n \to \infty} \frac{k (k-1) (k-2)\cdots 1}{ (\epsilon \ln 2)^k\, n^{\epsilon}}$$ The numerator is a constant while the denominator grows without bound, so the limit is 0 and $f(n) = o(g(n))$; consequently it is not $f(n) = \omega(g(n))$. The little-o bound also gives $f(n) = O(g(n))$: from the definition $$0 \leq \lg^k{n} \leq c\, n^\epsilon $$ $$0 \leq \frac{\lg^k{n}} {n^\epsilon} \leq c $$ and the fractional part goes to zero, so any positive c works for large n. For $f(n) = \theta(g(n))$ the function would also have to be bounded below by $c\,g(n)$, which fails here, so $f(n)$ is not $\theta(g(n))$, and $\Omega$ fails as well.
 b. If $n^k = o(c^n)$ then $$\lim_{n \to \infty} \frac{n^k}{c^n} = 0$$ Let's apply L'Hôpital's rule (the derivative of $c^n$ with respect to n is $c^n \ln c$): $$\lim_{n \to \infty} \frac{k\, n^{k-1}}{c^{n} \ln c}$$ If we apply L'Hôpital's rule k times in all, $$\lim_{n \to \infty} \frac{k (k-1)\cdots 1 }{c^{n} (\ln c)^k}$$ The numerator is a constant while the denominator grows without bound (c > 1), so the limit is 0; hence $f(n) = o(g(n))$ and $f(n) = O(g(n))$, while $\Omega$, $\omega$, and $\theta$ all fail.
 c. $\sqrt{n}$ and $n^{\sin {n}}$: since the sine oscillates between $-1$ and $1$, $n^{\sin {n}}$ never settles above or below $\sqrt{n} = n^{1/2}$; it is infinitely often greater than $\sqrt{n}$ and infinitely often less. Hence none of the relations hold.
 d. $2^n$ and $2^{n/2}$: $$\lim_{n \to \infty} \frac{2^n }{2^{n/2}} = \lim_{n \to \infty} 2^{n/2} $$ This approaches infinity as n approaches infinity, hence $2^n = \omega(2^{n/2})$, which also gives $2^n = \Omega(2^{n/2})$: taking c = 1, $$0 \leq c\, 2^{n/2} \leq 2^n$$ holds for all $n \geq 0$. The $O$, $o$, and $\theta$ relations fail.
 e. $n^{\lg c}$ and $c^{\lg n}$: both expressions are the same. We can write $$n^{\lg c} = c^{\log_c (n^{\lg c})} = c^{(\log_c n)(\lg c)} = c^{\frac{\lg n}{\lg c} \lg c} = c^{\lg n} $$ Since the two expressions are identical, the non-strict relations $O$, $\Omega$, and $\theta$ all hold, while the strict relations $o$ and $\omega$ do not.
 f. $\lg{(n!)}$ and $\lg{(n^n)}$:
by Stirling's approximation $\lg{(n!)} = \theta(n \lg n) = \theta(\lg(n^n))$, so the two expressions are $\theta$ of each other; hence $O$ and $\Omega$ hold as well, while $o$ and $\omega$ do not.
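The identity in part e can be spot-checked numerically; a sketch (the sample values of n and c are my choices):

```python
import math

# spot-check of n^{lg c} = c^{lg n} for several n and c
for n in (8, 1024, 10 ** 6):
    for c in (2, 3, 10):
        assert math.isclose(n ** math.log2(c), c ** math.log2(n))
```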

Thursday, November 22, 2012

3-1


3-1 Asymptotic behavior of polynomials Let $$p(n) = \sum\limits_{i=0}^d a_i n^i$$ where $a_d \gt 0$, be a degree-d polynomial in n, and let k be a constant. Use the definitions of the asymptotic notations to prove the following properties.
 a. If $k \geq d$, then $p(n) = O(n^k) $
 b. If $k \leq d$, then $p(n) = \Omega(n^k)$.
 c. If $k = d$, then $p(n) = \theta(n^k)$.
 d. If $k \gt d$, then $p(n) = o(n^k)$.
 e. If $k \lt d$, then $p(n) = \omega(n^k)$.
solution:
 a. If $k \geq d$, then $p(n) = O(n^k)$. By the definition of big-O, $0\leq f(n) \leq cg(n)$, we must show $0 \leq \sum\limits_{i=0}^d a_i n^i \leq c n^k$.
Divide the inequality by $n^k$:
$$0 \leq {{\sum\limits_{i=0}^d a_i n^i}\over n^k} \leq c $$ $$0 \leq \sum\limits_{i=0}^d a_i n^{i-k} \leq c $$ Now since $k \geq d$, every exponent satisfies $i-k \leq 0$, so each $n^{i-k}$ is at most 1 for $n \geq 1$; choosing $c = \sum\limits_{i=0}^d |a_i| $ (absolute values, in case some lower-order coefficients are negative) makes the inequality true.
  b. If $k \leq d$, then $p(n) = \Omega(n^k)$. By the definition of big-omega, $0 \leq c\,g(n) \leq f(n)$, we must show $0 \leq c n^k \leq \sum\limits_{i=0}^d a_i n^i $.
Divide the inequality by $n^k$:
$$0 \leq c \leq {{\sum\limits_{i=0}^d a_i n^i} \over n^k} $$ $$0 \leq c \leq \sum\limits_{i=0}^d a_i n^{i-k} $$ Now since $k \leq d $, the leading exponent $d-k \geq 0$, so the term $a_d n^{d-k}$ (with $a_d > 0$) dominates for large n; choosing $c = a_d/2$ makes the inequality true for all $n$ beyond some $n_0$.
 c. If $k = d$, then $p(n) = \theta(n^k)$. By the definition of $\theta(n)$, $0 \leq c_1 n^k \leq \sum\limits_{i=0}^d a_i n^i \leq c_2 n^k$. Dividing the inequality by $n^k$: $$0 \leq c_1 \leq {{\sum\limits_{i=0}^d a_i n^i} \over n^k} \leq c_2 $$ $$0 \leq c_1 \leq \sum\limits_{i=0}^d a_i n^{i-k} \leq c_2 $$ Let's find values of $c_1$ and $c_2$ that satisfy the inequality. Since $k = d$, every exponent $i-k \leq 0$ and each $n^{i-k}$ is at most 1 for $n \geq 1$.
I find it easier to find values of $c_1$ and $c_2$ by trying small examples. Ex: let k = 1 and, since k = d, d = 1: $$ c_1 \leq \sum\limits_{i=0}^1 a_i n^{i-k} = a_0 n^{-1}+ a_1 $$ Taking $c_1 = a_1$ (that is, $a_d$) makes the inequality hold for all large n when the lower coefficients are nonnegative (otherwise take $c_1 = a_d/2$ and n large enough, as in part b). Doing the same for d = 1, 2, ..., $c_1 = a_d$ works in general. Now for $c_2$: $$c_2 \geq \sum\limits_{i=0}^d a_i n^{i-k}$$ In the same example, $$c_2 \geq a_0 n^{-1}+ a_1 $$ and $$ c_2 = a_0+a_1$$ always suffices, which is the sum of the coefficients of the polynomial: $$ c_2 = \sum\limits_{i=0}^1 a_i$$ and in general, for d = 1, 2, ..., $$ c_2 = \sum\limits_{i=0}^d a_i$$
The remaining parts, d and e, follow similarly, with strict inequalities in place of the non-strict ones.
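Part c can be spot-checked with a concrete polynomial; a sketch using the hypothetical example $p(n) = 3n^2 + 2n + 5$ (so d = k = 2, $c_1 = a_d = 3$, and $c_2 =$ the sum of coefficients $= 10$):

```python
# hypothetical example polynomial p(n) = 3n^2 + 2n + 5 with d = k = 2
def p(n):
    return 3 * n * n + 2 * n + 5

# c1 = a_d = 3 and c2 = 3 + 2 + 5 = 10 bound p(n) between c1*n^2 and c2*n^2
for n in range(1, 1000):
    assert 3 * n ** 2 <= p(n) <= 10 * n ** 2
```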

Saturday, November 17, 2012

3.2-7


3.2-7 Prove by induction that the i-th Fibonacci number satisfies the equality $$F_i = {\theta^i - \widehat{\theta}^i \over\sqrt5}$$ where $\theta$ is the golden ratio and $\widehat{\theta}$ is its conjugate. Solution: $\theta$ and $\widehat{\theta}$ are the roots of $$x^2 = x + 1$$ Substituting a root into the equation, $$\theta^2 = \theta+1$$ Multiplying both sides of the equation by $\theta^{k}$, $$\theta^{k+2} = \theta^{k+1}+\theta^{k} $$ which is true for the conjugate as well: $$\widehat{\theta}^{k+2} = \widehat{\theta}^{k+1}+\widehat{\theta}^k $$ For the base cases, $F_0 = (\theta^0 - \widehat{\theta}^0)/\sqrt5 = 0$ and $F_1 = (\theta - \widehat{\theta})/\sqrt5 = \sqrt5/\sqrt5 = 1$. Now suppose \begin{equation}\label{eq:1} F(k) = {\theta^k - \widehat{\theta}^k \over\sqrt5} \end{equation} and \begin{equation}\label{eq:2} F(k+1) = {\theta^{k+1} - \widehat{\theta}^{k+1} \over\sqrt5} \end{equation} The Fibonacci numbers are defined by the recurrence $$F(n) = F(n-1)+F(n-2)$$ so $$F(k+2) = F(k+1)+F(k)$$ Substituting $F(k+1)$ and $F(k)$ from the assumptions (\ref{eq:1}) and (\ref{eq:2}), $$F(k+2) = {\theta^k - \widehat{\theta}^k \over\sqrt5} + {\theta^{k+1} - \widehat{\theta}^{k+1} \over\sqrt5} = {\theta^{k+1} +\theta^{k} \over\sqrt5} - \left({\widehat{\theta}^{k+1} +\widehat{\theta}^k \over\sqrt5}\right) = {\theta^{k+2} \over\sqrt5} - \left({\widehat{\theta}^{k+2}\over\sqrt5}\right) $$ Thus, by the mathematical induction principle, we conclude that $$F_i = {\theta^i - \widehat{\theta}^i \over\sqrt5}$$
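The closed form can be checked numerically against the recurrence; a sketch (variable names are mine):

```python
import math

# numeric check of F_i = (theta^i - theta_hat^i) / sqrt(5), where
# theta and theta_hat are the roots of x^2 = x + 1
theta = (1 + math.sqrt(5)) / 2
theta_hat = (1 - math.sqrt(5)) / 2

def fib(i):
    # Fibonacci by the recurrence, with F(0) = 0, F(1) = 1
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

for i in range(25):
    closed = (theta ** i - theta_hat ** i) / math.sqrt(5)
    assert round(closed) == fib(i)
```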