Approximate List-Decoding and Uniform Hardness Amplification

Transcript and Presenter's Notes
1
Approximate List-Decoding and Uniform Hardness
Amplification
  • Russell Impagliazzo (UCSD)
  • Ragesh Jaiswal (UCSD)
  • Valentine Kabanets (SFU)

2
Hardness Amplification
Diagram: a hard function f is transformed into a harder function F.
  • Given a hard function, we can construct an even harder function.

3
Hardness
Diagram: f on domain {0,1}^n; a circuit of size s must err on at least δ·2^n of the 2^n inputs.
  • A function f is called δ-hard for circuits of size s (algorithms with running time t) if every circuit of size s (every algorithm with running time t) errs in predicting the function on at least a δ fraction of the inputs.
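As a concrete illustration of the definition (our addition; `circuit` and `truth_table` are hypothetical stand-ins), a few lines of Python that measure a circuit's error fraction:

```python
def error_fraction(circuit, truth_table, n):
    """Fraction of the 2^n inputs on which `circuit` disagrees with f.

    f is delta-hard for a class if EVERY circuit/algorithm in the class
    has error_fraction >= delta.
    """
    total = 1 << n
    errors = sum(1 for x in range(total) if circuit(x) != truth_table[x])
    return errors / total

# Example: f = parity of 3 bits; the constant-0 circuit errs on half the inputs.
parity = [bin(x).count("1") % 2 for x in range(8)]
print(error_fraction(lambda x: 0, parity, 3))  # 0.5
```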

4
XOR Lemma
Diagram: split the nk-bit input into k blocks of n bits, apply f to each block, and XOR the k output bits:
f^⊕k: {0,1}^{nk} → {0,1},  f^⊕k(x1, …, xk) = f(x1) ⊕ … ⊕ f(xk)
  • XOR Lemma: If f is δ-hard for size-s circuits, then f^⊕k is (1/2 - ε)-hard for size-s′ circuits, where ε = e^{-O(δk)} and s′ = s · poly(δ, ε).

5
XOR Lemma Proof: Ideal case
C⊕ (which computes f^⊕k on at least a (1/2 + ε) fraction of inputs)
  ↓ A (succeeds w.h.p.)
C (which computes f on at least a (1 - δ) fraction of inputs)
6
XOR Lemma Proof: The actual, nonuniform reduction
C⊕ (which computes f^⊕k on at least a (1/2 + ε) fraction of inputs)
  ↓ A, given Advice (|Advice| = poly(1/ε)) (succeeds w.h.p.)
C1, …, Cl; at least one of them computes f on at least a (1 - δ) fraction of inputs
l = 2^{|Advice|} = 2^{poly(1/ε)}
7
Optimal List Size
  • Question: What list size should the reduction target?
  • There is a good combinatorial answer, using error-correcting codes.

C⊕
  ↓ A (succeeds w.h.p.)
C1, …, Cl
8
XOR-based Code T03
  • Think of a binary message msg of M = 2^n bits as the truth table of a Boolean function f.
  • The code of msg has length M^k, where code(x1, …, xk) = f(x1) ⊕ … ⊕ f(xk).

Diagram: msg is indexed by x (|x| = n), with entry f(x); code is indexed by x̄ = (x1, …, xk), with entry f(x1) ⊕ … ⊕ f(xk).
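To make the encoding concrete, a small Python sketch (our illustration; only feasible for tiny n and k, since the codeword has M^k entries):

```python
from functools import reduce
from itertools import product

def xor_encode(msg, n, k):
    """XOR code: view msg (list of 2^n bits) as the truth table of f and
    emit one bit per k-tuple: code(x1,...,xk) = f(x1) XOR ... XOR f(xk)."""
    M = 1 << n
    assert len(msg) == M
    return {xs: reduce(lambda acc, x: acc ^ msg[x], xs, 0)
            for xs in product(range(M), repeat=k)}

# Example: n = 2, k = 2, f = AND of the two input bits.
code = xor_encode([0, 0, 0, 1], n=2, k=2)
print(code[(3, 1)])  # f(3) XOR f(1) = 1 XOR 0 = 1
```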
9
List Decoder
Diagram: m → [XOR Encoding] → c → [channel: corrupts up to a (1/2 - ε) fraction] → w → [Decoding] → m1, …, ml, where each decoded candidate need only agree with m on a (1 - δ) fraction.
  • Decoder: local, approximate, list

Information-theoretically, l should be O(1/ε²).
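For intuition (our addition: a standard coding-theory fact, not a claim from the slides), the O(1/ε²) limit follows from a second-moment, Johnson-bound style argument:

```latex
% Suppose codewords c_1, ..., c_l each agree with the received word w on
% a (1/2 + eps) fraction of the N positions.  In +-1 notation this means
% <c_i, w> >= 2*eps*N for every i, while distinct c_i are pairwise nearly
% orthogonal.  Then Cauchy-Schwarz gives
%   2*eps*N*l <= <w, sum_i c_i> <= ||w|| * ||sum_i c_i|| ~ N*sqrt(l),
% and solving for l yields
\[
  l \;=\; O\!\left(\frac{1}{\epsilon^{2}}\right).
\]
```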
10
The List Size
  • The proof of Yao's XOR Lemma yields an approximate local list-decoding algorithm for the XOR code defined above.
  • But the list size is 2^{poly(1/ε)} rather than the optimal poly(1/ε).
  • Goal: match the information-theoretic bound on list decoding, i.e., get advice of length O(log(1/ε)).

11
The Main Result
12
The Main Result
C⊕ ((1/2 + ε)-computes f^⊕k)
  ↓ A, given Advice (|Advice| = O(log(1/ε))) (succeeds w.h.p.)
C ((1 - δ)-computes f)
  • ε = poly(1/k), δ = O(k^{-0.1})
  • The running time of A and the size of C are at most poly(|C⊕|, 1/ε)

13
The Main Result
C⊕ ((1/2 + ε)-computes f^⊕k)
  ↓ A, with no advice (succeeds w.p. poly(ε))
C ((1 - δ)-computes f)
  • ε = poly(1/k), δ = O(k^{-0.1})
  • The running time of A and the size of C are at most poly(|C⊕|, 1/ε)

14
The Main Result
C⊕ ((1/2 + ε)-computes f^⊕k)
  ↓ A with Advice (|Advice| = O(log(1/ε))), w.h.p.; equivalently, A with no advice, w.p. poly(ε)
C1, …, Cl, with l = poly(1/ε)
At least one of them (1 - δ)-computes f

Advice-efficient XOR Lemma:
  • We get a list size of poly(1/ε), which is optimal, but
  • ε is large: ε = poly(1/k)

15
Uniform Hardness Amplification
16
Uniform Hardness Amplification
  • What we want:
f hard w.r.t. BPP  ⇒  g harder w.r.t. BPP
  • What we get (advice-efficient XOR Lemma):
f hard w.r.t. BPP/log  ⇒  g harder w.r.t. BPP
17
Uniform Hardness Amplification
  • What we can do:

f ∈ NP, 1/n^c-hard w.r.t. BPP
  ⇒ [BDCGL92] ⇒  f ∈ NP, hard w.r.t. BPP/log
  ⇒ [advice-efficient XOR Lemma] ⇒  g ∈ P^NP, (1/2 - 1/n^d)-hard w.r.t. BPP
  ⇒ [simple average-case reduction] ⇒  h ∈ P^NP, hard w.r.t. BPP

  • g is not necessarily in NP, but g ∈ P^NP
  • P^NP here means: a poly-time TM that can make polynomially many parallel oracle queries to an NP oracle

Trevisan gives a weaker reduction (from 1/n^c-hardness to (1/2 - 1/(log n)^a)-hardness), but one that stays within NP.
18
Techniques
19
Techniques
  • Advice-efficient Direct Product Theorem
  • A Sampling Lemma
  • Learning without advice:
    • Self-generated advice
    • Fault-tolerant learning using faulty advice

20
Direct Product Theorem
Diagram: split the nk-bit input into k blocks of n bits, apply f to each block, and concatenate the k output bits (encoding sketched below):
f^k: {0,1}^{nk} → {0,1}^k,  f^k(x1, …, xk) = (f(x1), …, f(xk))
  • Direct Product Theorem: If f is δ-hard for size-s circuits, then f^k is (1 - ε)-hard for size-s′ circuits, where ε = e^{-O(δk)} and s′ = s · poly(δ, ε).
  • By the Goldreich-Levin Theorem, the XOR Lemma and the Direct Product Theorem are saying the same thing.
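By analogy with the XOR encoder above, a minimal sketch of the direct-product encoding (again our illustration):

```python
from itertools import product

def dp_encode(msg, n, k):
    """Direct-product code: each codeword entry is the k-tuple of f-values,
    f^k(x1,...,xk) = (f(x1), ..., f(xk)), rather than their XOR."""
    M = 1 << n
    assert len(msg) == M
    return {xs: tuple(msg[x] for x in xs)
            for xs in product(range(M), repeat=k)}

# Example: n = 2, k = 2, f = AND of the two input bits.
code = dp_encode([0, 0, 0, 1], n=2, k=2)
print(code[(3, 0)])  # (f(3), f(0)) = (1, 0)
```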

21
XOR Lemma from Direct Product Theorem
C⊕ ((1/2 + ε)-computes f^⊕k)
  ↓ A1 (using the Goldreich-Levin Theorem; succeeds w.h.p.)
C_DP (poly(ε)-computes f^k)
  ↓ A2 (succeeds w.p. poly(ε))
C ((1 - δ)-computes f)
  • ε = poly(1/k), δ = O(k^{-0.1})

22
LEARN from IW97
C_DP (ε-computes f^k)
  ↓ LEARN [IW97] (advice: n/ε² pairs (x, f(x)) for independent uniform x's; succeeds w.h.p.)
C ((1 - δ)-computes f)
  • ε = e^{-O(δk)}

23
Goal
  • We want to eliminate the advice (the (x, f(x)) pairs).
  • In exchange, we are ready to make some compromise on the success probability of the randomized algorithm.

C_DP (ε-computes f^k)
  ↓ LEARN [IW97]: needs advice (n/ε² pairs (x, f(x)) for independent uniform x's), succeeds w.h.p., with ε = e^{-O(δk)}
  ↓ LEARN′: no advice!, succeeds w.p. poly(ε), with ε = poly(1/k) and δ = O(k^{-0.1})
C ((1 - δ)-computes f)

24
Self-generated advice
25
Imperfect samples
  • We want to use the circuit C_DP to generate n/ε² pairs (x, f(x)) for independent uniform x's.
  • We will settle for n/ε² pairs (x, b_x) such that:
    • the distribution on the x's is statistically close to uniform, and
    • for most x's we have b_x = f(x).
  • Then run a fault-tolerant version of LEARN on C_DP and the generated pairs (x, b_x).

26
How to generate imperfect samples
27
A Sampling Lemma
Diagram: a k-tuple (x1, x2, x3, …, xk) of n-bit strings, i.e., an element of {0,1}^{nk} (a universe of size 2^{nk}).
  • D is a uniform distribution (the k-tuple is drawn uniformly from {0,1}^{nk})
28
A Sampling Lemma
Diagram: the k-tuple (x1, x2, x3, …, xk) is now drawn uniformly from a subset G of {0,1}^{nk}.
  • |G| > ε · 2^{nk}
  • Stat-Dist(D, U) < ((log 1/ε)/k)^{1/2}, where D is the induced distribution on a random sub-tuple and U is uniform
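A minimal Python sketch of the sub-tuple sampling this lemma analyzes (our illustration; the guarantee quoted in the docstring is the lemma's statement from the slide, not something the code checks):

```python
import random

def random_subtuple(tup, m):
    """Pick a uniformly random sub-tuple of size m from `tup`,
    preserving the original order of coordinates.

    Sampling Lemma (as stated above): if `tup` is uniform over a set G of
    density > eps in {0,1}^{nk}, the resulting sub-tuple is within
    statistical distance sqrt(log(1/eps)/k) of uniform.
    """
    idx = sorted(random.sample(range(len(tup)), m))
    return tuple(tup[i] for i in idx)

# Example: a sqrt(k)-sized sub-tuple of a 16-tuple.
print(random_subtuple(tuple(range(16)), 4))
```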
29
Getting Imperfect Samples
  • G = the set of k-tuples x̄ on which C_DP(x̄) = f^k(x̄)
  • |G| > ε · 2^{nk}
  • Pick a random k-tuple x̄, then pick a random sub-tuple x′ of size k^{1/2}.
  • With probability at least ε, x̄ lands in the good set G.
  • Conditioned on this, the Sampling Lemma says that x′ is close to being uniformly distributed.
  • If k^{1/2} > the number of samples required by LEARN, then we are done! (A sketch of this procedure follows the list.)
  • Else … (handled by Direct Product Amplification, next)
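Here is a hedged Python sketch of one sampling attempt (our illustration: the interface of C_DP, returning one guessed bit per coordinate, is an assumption):

```python
import random

def self_generated_samples(C_DP, n, k):
    """One attempt at generating labeled pairs (x, b_x) from C_DP alone.

    Draw a random k-tuple, let C_DP guess all k values of f, and keep a
    random sqrt(k)-sized sub-tuple of (input, guessed-bit) pairs.  With
    probability >= eps the tuple lies in the good set G, and then the
    x's are near-uniform and most labels satisfy b_x = f(x).
    """
    xs = tuple(random.getrandbits(n) for _ in range(k))
    guesses = C_DP(xs)                 # k bits, one guess per coordinate
    m = int(k ** 0.5)
    idx = sorted(random.sample(range(k), m))
    return [(xs[i], guesses[i]) for i in idx]

# Toy usage with a stand-in "circuit" that answers randomly:
print(self_generated_samples(lambda xs: [random.getrandbits(1) for _ in xs],
                             n=8, k=16))
```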

30
Direct Product Amplification
  • Want: from C_DP for f^k, obtain C′_DP which poly(ε)-computes f^{k′}, where (k′)^{1/2} > n/ε².
  • What we can actually achieve: C_DP ⇒ C′_DP such that, for at least a poly(ε) fraction of k′-tuples x̄, C′_DP(x̄) and f^{k′}(x̄) agree on most bits.

31
Putting Everything Together
32
C_DP for f^k
  ↓ Direct Product Amplification
C′_DP for f^{k′}
  ↓ Sampling
pairs (x, b_x)
  ↓ Fault-tolerant LEARN
with probability > poly(ε): a circuit C that (1 - δ)-computes f

Repeat poly(1/ε) times to get a list containing a good circuit for f, w.h.p.
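A higher-order Python sketch of the whole pipeline (our illustration; the slides specify only the interfaces, so the four subroutines are taken as parameters and the number-of-trials exponent is arbitrary):

```python
def approximate_list_decode(C_xor, goldreich_levin, dp_amplify,
                            generate_samples, fault_tolerant_learn, eps):
    """End-to-end sketch of the decoder on this slide.

    Each trial succeeds with probability poly(eps), so poly(1/eps)
    independent trials give a list that w.h.p. contains a circuit
    (1 - delta)-computing f.
    """
    trials = max(1, int(1.0 / eps) ** 2)   # poly(1/eps); exponent illustrative
    candidates = []
    for _ in range(trials):
        C_dp = goldreich_levin(C_xor)       # XOR circuit -> f^k circuit
        C_dp_amp = dp_amplify(C_dp)         # f^k circuit -> f^{k'} circuit
        pairs = generate_samples(C_dp_amp)  # self-generated pairs (x, b_x)
        candidates.append(fault_tolerant_learn(C_dp_amp, pairs))
    return candidates
```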
33
Open Questions
34
Open Questions
  • Advice-efficient XOR Lemma for smaller ε?
    • For ε > exp(-k^a) we get a quasi-polynomial list size.
  • Can we get an advice-efficient hardness amplification result using a monotone combining function m (instead of ⊕)?
    • Some results [Buresh-Oppenheim, Kabanets, Santhanam] use monotone list-decodable codes to re-prove Trevisan's results for amplification within NP.

35
Thank You