Hardness Amplification within NP against Deterministic Algorithms


Transcript and Presenter's Notes



1
Hardness Amplification within NP against
Deterministic Algorithms
2
Why Hardness Amplification?
  • Goal: Show that there are hard problems in NP.
  • Lower bounds are out of reach.
  • Cryptography and derandomization require average-case
    hardness.
  • Revised Goal: Relate various kinds of hardness
    assumptions.
  • Hardness Amplification: Start with mild hardness and
    amplify it.

3
Hardness Amplification
  • Generic Amplification Theorem:
  • If there are problems in class A that are mildly
    hard for algorithms in Z, then there are problems
    in A that are very hard for Z.

A ∈ {NP, EXP, PSPACE};  Z ∈ {P/poly, BPP, P}
4
PSPACE versus P/poly, BPP
  • Long line of work.
  • Theorem: If there are problems in PSPACE that are
    worst-case hard for P/poly (BPP), then there are
    problems that are ½ + ε hard for P/poly (BPP).

Yao, Nisan-Wigderson, Babai-Fortnow-Nisan-Wigderson,
Impagliazzo, Impagliazzo-Wigderson1,
Impagliazzo-Wigderson2, Sudan-Trevisan-Vadhan,
Trevisan-Vadhan, Impagliazzo-Jaiswal-Kabanets,
Impagliazzo-Jaiswal-Kabanets-Wigderson.
5
NP versus P/poly
  • O'Donnell.
  • Theorem: If there are problems in NP that are 1 - δ
    hard for P/poly, then there are problems that are
    ½ + ε hard.
  • Starts from an average-case assumption.
  • Healy-Vadhan-Viola.

6
NP versus BPP
  • Trevisan '03.
  • Theorem: If there are problems in NP that are 1 - δ
    hard for BPP, then there are problems that are
    ¾ + ε hard.

7
NP versus BPP
  • Trevisan '05.
  • Theorem: If there are problems in NP that are 1 - δ
    hard for BPP, then there are problems that are
    ½ + ε hard.
  • Buresh-Oppenheim-Kabanets-Santhanam: alternate
    proof via monotone codes.
  • Optimal up to ε.

8
Our Results: Amplification against P
  • Theorem 1: If there is a problem in NP that is
    1 - δ hard for P, then there is a problem which is
    ¾ + ε hard.
  • Theorem 2: If there is a problem in PSPACE that is
    1 - δ hard for P, then there is a problem which is
    ¾ + ε hard.
  • Trevisan: 1 - δ hardness to 7/8 + ε for PSPACE.
  • Goldreich-Wigderson: Unconditional hardness for
    EXP against P.

δ ≈ 1/(log n)^100
ε ≈ 1/n^100
9
Outline of This Talk
  • Amplification via Decoding.
  • Deterministic Local Decoding.
  • Amplification within NP.

10
Outline of This Talk
  • Amplification via Decoding.
  • Deterministic Local Decoding.
  • Amplification within NP.

11
Amplification via Decoding [Trevisan, Sudan-Trevisan-Vadhan]
[Figure: f (mildly hard) is encoded into g (wildly hard); an
approximation to g is decoded back to f.]
12
Amplification via Decoding.
Case Study: PSPACE versus BPP.
  • f's table has size 2^n.
  • g's table has size 2^(n^2).
  • Encoding in space n^100.

[Figure: f (mildly hard) is encoded, in PSPACE, into g (wildly hard).]
13
Amplification via Decoding.
Case Study: PSPACE versus BPP.
  • Randomized local decoder.
  • List-decoding beyond ¼ error.

[Figure: an approximation to g is decoded, in BPP, back to f.]
14
Amplification via Decoding.
Case Study: NP versus BPP.
  • g is a monotone function M of f.
  • M is computable in NTIME(n^100).
  • M needs to be noise-sensitive.

[Figure: f (mildly hard) is encoded, in NP, into g (wildly hard).]
15
Amplification via Decoding.
Case Study: NP versus BPP.
  • Randomized local decoder.
  • Monotone codes are bad codes.
  • Can only approximate f.

[Figure: an approximation to g is decoded, in BPP, to an approximation of f.]
16
Outline of This Talk
  • Amplification via Decoding.
  • Deterministic Local Decoding.
  • Amplification within NP.

17
Deterministic Amplification.
Deterministic local decoding?
[Figure: a corrupted codeword is decoded, in P, back to f.]
18
Deterministic Amplification.
Deterministic local decoding?
  • Can force an error on any bit.
  • Need near-linear length encoding.
  • Monotone codes for NP.

[Figure: a codeword of length 2^n · n^100 is decoded, in P, to a message of length 2^n.]
19
Deterministic Local Decoding
  • Decoding up to the unique decoding radius.
  • Deterministic local decoding to 1 - δ agreement
    from ¾ + ε agreement.
  • Monotone code construction with similar
    parameters.
  • Main tool: ABNNR codes + GMD decoding
    [Guruswami-Indyk, Akavia-Venkatesan].
  • Open Problem: Go beyond unique decoding.

20
The ABNNR Construction.
  • Expander graph.
  • 2^n vertices.
  • Degree n^100.

21
The ABNNR Construction.
  • Expander graph.
  • 2^n vertices.
  • Degree n^100.

[Figure: message bits placed on the left vertices of the expander.]
22
The ABNNR Construction.
  • Expander graph.
  • 2^n vertices.
  • Degree n^100.

  • Start with a binary code with small distance.
  • Gives a code of large distance over a large
    alphabet (see the encoding sketch below).

[Figure: each right vertex collects its left neighbours' bits into one large-alphabet symbol.]
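A minimal sketch of the ABNNR encoding step, not taken from the slides: `neighbors` is a hypothetical hash-based stand-in for the expander's neighbor function (the real construction uses a bipartite expander with 2^n vertices per side and degree n^100).

```python
import hashlib

def neighbors(v, n, d):
    """d pseudo-random left neighbors of right vertex v.
    Hypothetical stand-in: NOT a real expander, just a fixed hash-based map."""
    out = []
    for j in range(d):
        h = hashlib.sha256(f"{v},{j}".encode()).digest()
        out.append(int.from_bytes(h[:8], "big") % n)
    return out

def abnnr_encode(msg, d):
    """ABNNR distance amplification: each right vertex collects the bits of
    its d left neighbors into one symbol over the alphabet {0,1}^d."""
    n = len(msg)
    return [tuple(msg[u] for u in neighbors(v, n, d)) for v in range(n)]

# Example: a 16-bit message amplified with degree 3.
symbols = abnnr_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0], d=3)
```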
23
Concatenated ABNNR Codes.
Inner code of distance ½.
[Figure: each large-alphabet symbol is further encoded by a binary inner code of distance ½.]
  • Binary code of distance ½.
  • Guruswami-Indyk: ¼ error, not local.
  • Trevisan: 1/8 error, local.

24
Decoding ABNNR Codes.
[Figure: a received word, one corrupted inner block per right vertex.]
25
Decoding ABNNR Codes.
  • Decode the inner codes.
  • Works if the error rate is < ¼.
  • Fails if the error rate is > ¼.

[Figure: each inner block is decoded to the nearest inner codeword.]
26
Decoding ABNNR Codes.
Majority vote on the LHS [Trevisan]: corrects a
1/8 fraction of errors.
[Figure: each left bit is recovered by a majority vote over its right neighbours' decoded symbols.]
27
GMD Decoding [Forney '67]
Confidence c ∈ [0, 1].
  • If decoding succeeds, the error rate lies in [0, ¼].
  • If the error rate is 0, the confidence is 1.
  • If the error rate is ¼, the confidence is 0.
  • c = 1 - 4 · (error rate).

Could return a wrong answer with high confidence,
but this requires an error rate close to ½
(see the decoding sketch below).
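A small sketch of the inner decoding step with Forney-style confidences, assuming (as an illustration) that the inner code is given as an explicit map from ABNNR symbols to codewords; the toy code below is a stand-in for the distance-½ inner code on the slides.

```python
def hamming_frac(a, b):
    """Fraction of positions on which two equal-length blocks disagree."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def decode_with_confidence(block, inner_code):
    """Decode one received inner block to the nearest inner codeword and
    report a Forney-style confidence c = 1 - 4*eta, where eta is the
    fraction of positions disagreeing with the decoded codeword
    (c = 1 at eta = 0, c = 0 once eta reaches 1/4)."""
    msg, cw = min(inner_code.items(), key=lambda mc: hamming_frac(block, mc[1]))
    eta = hamming_frac(block, cw)
    return msg, max(0.0, 1.0 - 4.0 * eta)

# Toy inner code of relative distance 1/2 mapping 2-bit symbols to 4 bits.
toy_code = {(0, 0): (0, 0, 0, 0), (0, 1): (0, 1, 0, 1),
            (1, 0): (1, 0, 1, 0), (1, 1): (1, 1, 1, 1)}
print(decode_with_confidence((1, 0, 1, 1), toy_code))  # ((1, 0), 0.0): ambiguous, zero confidence
```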
28
GMD Decoding for ABNNR Codes.
GMD decoding: pick a threshold, erase low-confidence
symbols, decode. Non-local. Our approach: weighted
majority (see the sketch below). Thm: corrects a ¼
fraction of errors locally.
[Figure: each decoded inner block carries a confidence value c_i.]
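A sketch of the weighted-majority outer step, reusing the stand-in `neighbors` map and the `decode_with_confidence` helper from the sketches above. Unlike the real decoder, it scans all right vertices rather than using the expander's explicit neighborhoods, so it is illustrative only.

```python
def gmd_decode_bit(u, received_blocks, inner_code, n, d):
    """Recover left bit u: each right neighbor of u decodes its inner block
    (via decode_with_confidence) and votes for the bit it holds in u's
    position, weighted by its confidence.  Plain majority vote is the
    special case where every confidence is 1."""
    vote = 0.0
    for v in range(n):
        nbrs = neighbors(v, n, d)          # same stand-in graph as the encoder
        if u not in nbrs:
            continue
        symbol, conf = decode_with_confidence(received_blocks[v], inner_code)
        vote += conf if symbol[nbrs.index(u)] == 1 else -conf
    return 1 if vote > 0 else 0
```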
29
GMD Decoding for ABNNR Codes.
  • Thm: GMD decoding corrects a ¼ fraction of errors.
  • Proof Sketch:
  • Globally, good nodes have more confidence than
    bad nodes.
  • Locally, this holds for most neighborhoods of
    vertices on the LHS.

Proof is similar to the Expander Mixing Lemma (stated below).
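For reference (not stated on the slide), the standard Expander Mixing Lemma: for a d-regular graph on N vertices whose second-largest eigenvalue in absolute value is λ, and any vertex sets S and T,

  |e(S, T) - (d/N) · |S| · |T||  ≤  λ · sqrt(|S| · |T|)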
30
Outline of This Talk
  • Amplification via Decoding.
  • Deterministic Local Decoding.
  • Amplification within NP.
  • Finding an inner monotone code [BOKS].
  • Implementing GMD decoding.

31
The BOKS construction.
  • T(x): sample an r-tuple from x, apply the Tribes
    function (see the sketch below).
  • If x, y are balanced and Δ(x, y) > δ, then
    Δ(T(x), T(y)) ≈ ½.
  • If x, y are very close, so are T(x), T(y).
  • Decoding: brute force.

[Figure: x of length k is mapped to T(x) of length k^r.]
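A sketch of a BOKS-style monotone map built from the Tribes function. This is not the paper's exact construction: the tuple sampling, block size, and output length below are illustrative choices.

```python
import random

def tribes(bits, block_size):
    """Tribes: OR of ANDs over consecutive blocks of `block_size` bits.
    Monotone: turning a 0 into a 1 can only switch the output on."""
    return int(any(all(bits[i:i + block_size])
                   for i in range(0, len(bits), block_size)))

def monotone_encode(x, r, block_size, num_outputs, seed=0):
    """Each output bit applies Tribes to a pseudo-randomly sampled r-tuple of
    coordinates of x.  No coordinate is negated, so the map is monotone in x."""
    rng = random.Random(seed)
    out = []
    for _ in range(num_outputs):
        tup = [x[rng.randrange(len(x))] for _ in range(r)]
        out.append(tribes(tup, block_size))
    return out

# Example: a balanced 12-bit x mapped to 32 Tribes evaluations on 6-tuples.
tx = monotone_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1],
                     r=6, block_size=2, num_outputs=32)
```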
32
GMD Decoding for Monotone codes.
  • Start with a balanced f, apply the concatenated
    ABNNR code.
  • The inner decoder returns the closest balanced
    message.
  • Apply GMD decoding.
  • Thm: The decoder corrects a ¼ fraction of errors,
    approximately.
  • The analysis becomes harder.

33
GMD Decoding for Monotone codes.
  • The inner decoder finds the closest balanced message
    by brute force (see the sketch below).
  • Even with 0 errors, the decoder need not return the
    original message.
  • Good nodes have few errors; bad nodes have many.
  • Thm: The decoder corrects a ¼ fraction of errors,
    approximately.

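A sketch of the brute-force inner decoding for the monotone case, assuming the `monotone_encode` map from the earlier sketch: enumerate all balanced messages and return the one whose encoding is nearest to the received block. This is feasible only because the inner blocks are short.

```python
from itertools import combinations

def closest_balanced_message(block, k, r, block_size, seed=0):
    """Brute force over all balanced k-bit messages; return the one whose
    encoding under monotone_encode is nearest (in Hamming distance) to the
    received block."""
    best_msg, best_dist = None, float("inf")
    for ones in combinations(range(k), k // 2):   # supports of balanced messages
        msg = [1 if i in ones else 0 for i in range(k)]
        enc = monotone_encode(msg, r, block_size, num_outputs=len(block), seed=seed)
        dist = sum(a != b for a, b in zip(enc, block))
        if dist < best_dist:
            best_msg, best_dist = msg, dist
    return best_msg
```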
34
Beyond Unique Decoding
  • Deterministic local list-decoder:
  • a set L of machines such that,
  • for any received word,
  • every nearby codeword is computed by some M ∈ L.
  • Is this possible?

Thank You!