# Advanced Algorithms (6311) Gautam Das - PowerPoint PPT Presentation


1
• Notes 04/28/2009
• Ranganath M R.

2
Outline
• Tail Inequalities
• Markov's Inequality
• Chebyshev's Inequality
• Chernoff Bounds

3
Tail Inequalities
• Markov's inequality says that, for a nonnegative random
variable X with mean µ, the probability of X being greater
than t is bounded by
• P(X ≥ t) ≤ µ/t
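As a quick sanity check, here is a minimal Python sketch (the function name and parameters are mine, not from the slides) that compares the empirical tail probability against the Markov bound µ/t, using the number of heads in n fair coin flips as the nonnegative variable X:

```python
import random

# Empirical check of Markov's inequality: P(X >= t) <= mu/t.
# X = number of heads in n fair coin flips, so mu = n/2.
# trials, n, and t are arbitrary illustrative choices.
def markov_demo(trials=20_000, n=10, t=8):
    mu = n / 2
    hits = sum(
        1 for _ in range(trials)
        if sum(random.randint(0, 1) for _ in range(n)) >= t
    )
    empirical = hits / trials
    bound = mu / t
    return empirical, bound

emp, bound = markov_demo()
print(emp, bound)  # the empirical tail stays well below mu/t
```

Note how loose the bound is here: the true P(X ≥ 8) for 10 fair coins is about 0.055, while µ/t = 5/8.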

4
• Chebyshev's inequality: for a random variable X with mean µ
and standard deviation σ,
• P(|X − µ| ≥ t·σ) ≤ 1/t²
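The same kind of empirical check works for Chebyshev's inequality. A minimal Python sketch (function name and parameters are mine), again using n fair coin flips so that µ = n/2 and σ = √n/2 as in the slides below:

```python
import random

# Empirical check of Chebyshev's inequality:
#   P(|X - mu| >= t*sigma) <= 1/t^2.
# X = number of heads in n fair coin flips.
def chebyshev_demo(trials=20_000, n=100, t=2.0):
    mu = n / 2
    sigma = n ** 0.5 / 2
    hits = sum(
        1 for _ in range(trials)
        if abs(sum(random.randint(0, 1) for _ in range(n)) - mu) >= t * sigma
    )
    return hits / trials, 1 / t**2

emp, bound = chebyshev_demo()
print(emp, bound)  # empirical two-sided tail vs. the 1/t^2 bound
```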

5
Chernoff bounds
• This is a setting where we can obtain much sharper tail
inequalities (exponentially sharp). The more the trials are
repeated, the higher the chance of getting very accurate
results. Let's see how this is possible.
• Example: imagine we have n coins (X1, ..., Xn),
• and let pi be the probability of heads for coin i.
• Now the random variable X = X1 + X2 + … + Xn,
• and µ = E[X] = p1 + p2 + … + pn in general.

6
• Some special cases
• All coins are fair, i.e. pi = 1/2.
• µ = n·pi = n·(1/2) = n/2, and σ = √n/2. Example: for n = 100,
σ = √100/2 = 5.
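The special case above is a one-liner to verify; a tiny Python sketch (function name is mine) computing µ = n/2 and σ = √n/2 for n fair coins:

```python
# Mean and standard deviation of the number of heads in n fair
# coin flips (the pi = 1/2 special case from the slide).
def coin_stats(n):
    mu = n / 2
    sigma = n ** 0.5 / 2
    return mu, sigma

print(coin_stats(100))  # (50.0, 5.0), matching the slide's example
```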

7
• The Chernoff bounds are given by:
• Upper tail (X deviates above µ): for δ > 0,
• P(X − µ ≥ δµ) ≤ [e^δ / (1 + δ)^(1+δ)]^µ
----------------- eqn 1
• Lower tail (X deviates below µ):
• P(µ − X ≥ δµ) ≤ e^(−µδ²/2)
• Here µ appears as the exponent of the right-hand-side
expression. Hence, if more trials are taken, µ = n/2 increases,
and since the base e^δ / (1 + δ)^(1+δ) is less than 1 for δ > 0,
the bound shrinks, so we get more accurate results.
• An example problem to illustrate this:
• The probability of a team winning a game is 1/3.
• What is the probability that the team will win at least 50
out of 100 games?

8
• µ = n·pi = 100·(1/3) = 100/3
• δ = (no. of games to win − µ)/µ
• = (50 − 100/3)/(100/3) = 1/2.
• Now, to calculate the probability of winning at least 50 games,
we substitute these values into eqn 1:
• [e^(1/2) / (3/2)^(3/2)]^(100/3) ≈ 0.027.
• Here, if we increase the number of games (in general, the
number of trials), µ increases, and since the expression
e^δ / (1 + δ)^(1+δ) evaluates to less than 1, the bound gets
tighter; hence we get more accurate results when more trials
are done.
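The substitution above can be checked numerically. A short Python sketch of eqn 1 (the function name is mine) evaluated on the slide's example of at least 50 wins out of 100 games with p = 1/3:

```python
import math

# Chernoff upper-tail bound from eqn 1:
#   P(X >= (1 + delta)*mu) <= (e^delta / (1 + delta)^(1 + delta))^mu
def chernoff_upper_bound(n, p, wins):
    mu = n * p                    # expected number of wins
    delta = (wins - mu) / mu      # relative deviation above the mean
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

bound = chernoff_upper_bound(100, 1/3, 50)
print(round(bound, 3))  # prints 0.027, as on the slide
```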

9
Derivation
• Let X = X1 + X2 + … + Xn
• Let Y = e^(tX), for some t > 0
• P(X − µ ≥ δµ) = P(X ≥ (1 + δ)µ)
• = P(Y ≥ e^(t(1+δ)µ)) ≤ E[Y] / e^(t(1+δ)µ)
(by Markov's inequality)
• Now E[Y] = E[e^(tX)] = E[e^(tX1 + tX2 + … + tXn)]
• = E[e^(tX1)] · E[e^(tX2)] ⋯ E[e^(tXn)]
(since the Xi are independent)

10
• Now let's consider E[e^(tXi)]
• Xi is either 0 or 1:
• Xi will be 0 with probability 1 − pi
• and 1 with probability pi.

11
• E[e^(tXi)] = pi·(e^t) + (1 − pi)·(1)
(if Xi = 1 then e^(tXi) = e^t; if Xi = 0 then e^(tXi) = 1)
• To be continued in the next class.
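The case analysis above can be written directly as a formula. A small Python sketch (function names are mine) of the per-coin expectation and, using the independence step from the derivation slide, the product over n identical coins:

```python
import math

# Per-coin moment generating function from the last slide:
#   E[e^(t*Xi)] = pi * e^t + (1 - pi) * 1,
# since Xi = 1 with probability pi and Xi = 0 with probability 1 - pi.
def coin_mgf(p_i, t):
    return p_i * math.exp(t) + (1 - p_i)

# By independence, E[e^(tX)] for n identical coins is the n-fold product.
def sum_mgf(n, p_i, t):
    return coin_mgf(p_i, t) ** n

# Sanity check: any MGF evaluated at t = 0 is E[1] = 1.
print(coin_mgf(0.5, 0.0), sum_mgf(10, 0.5, 0.0))  # prints 1.0 1.0
```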