
Schedule for end of the semester talks

Please mail me your preferred time for your talk. We’ll reserve 90 minutes for each talk. More slots will be opened in the future. Also, please reserve a room for your talk by mailing Gizel Maimon (the mail is, as usual, first name dot last name at weizmann.ac.il).

Lecture 12 (the last one)

Today was the last lecture of the course! What would your weekends look like without our Thursday circuit lower bounds lectures? Who knows.. Well, don’t forget we have the end-of-the-semester talks ahead of us, and I suggest you attend at least two or three of the ~20 talks. This is a good and somewhat informal chance to hear about classic and important results.

In today’s lecture we continued to talk about Matrix Rigidity. We showed how to construct modest, though state-of-the-art, rigid matrices from Algebraic-Geometric codes, using the untouched sub-matrix method. We then defined locally decodable codes (LDC) and locally correctable codes, or LCC (see a recent survey by Sergey Yekhanin), and showed a result by Zeev Dvir, which roughly states that if the generating matrix of a known construction of an LDC is not rigid, then there exists an LCC with large dimension (and somewhat nonstandard query and distance parameters). Therefore, an upper bound on the dimension of an LCC with such parameters implies that the generating matrices of known LDCs are rigid enough to yield linear-circuit lower bounds.
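These definitions are best illustrated by the classic Hadamard code, which is 2-query locally decodable (a toy sketch of my own, not one of the codes from the lecture): the codeword lists the parity ⟨x, a⟩ for every a, and x_i is recovered from two random queries, since ⟨x, a⟩ ⊕ ⟨x, a ⊕ e_i⟩ = x_i.

```python
import random

K = 4  # message length; the Hadamard codeword has length 2^K

def encode(x):
    # one parity bit <x, a> for every a in {0,1}^K
    return [bin(x & a).count("1") & 1 for a in range(1 << K)]

def decode_bit(word, i, rng):
    # 2-query local decoder for x_i: query a and a XOR e_i, XOR the answers;
    # each query is uniform, so a delta-corrupted word errs w.p. at most 2*delta
    a = rng.randrange(1 << K)
    return word[a] ^ word[a ^ (1 << i)]

rng = random.Random(0)
x = 0b1011
word = encode(x)
for pos in rng.sample(range(1 << K), 3):  # corrupt 3 of the 16 positions
    word[pos] ^= 1
for i in range(K):
    votes = sum(decode_bit(word, i, rng) for _ in range(501))
    assert (votes > 250) == bool(x >> i & 1)  # majority vote recovers x_i
```

Local correction is analogous, except that one recovers a codeword position rather than a message bit.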

That’s it. I want to thank you all for coming to the lectures and contributing your thoughts and questions to the discussion. I had a great time!

Gil

Lecture 11

Today we started to talk about Matrix Rigidity. We first described the motivating problem from algebraic circuit complexity, namely, the problem of explicitly constructing a matrix that, when viewed as a linear transformation, cannot be computed by a linear circuit of linear size and logarithmic depth simultaneously.

We then discussed matrix rigidity, showed a couple of examples and basic facts, and proved Valiant’s theorem, which roughly states that if a matrix is rigid enough, then it cannot be computed by a circuit as above. In other words, Valiant reduced the computational problem to an algebraic/combinatorial one (which is always great!).
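In symbols, the rigidity of a matrix A, and one common formulation of Valiant’s theorem (the exact parameters vary between sources; this follows the statement in Lokam’s survey):

```latex
% R_A(r): the least number of entries of A one must change to bring its rank down to r
\[
  \mathcal{R}_A(r) \;=\; \min \{\, \mathrm{wt}(C) \;:\; \operatorname{rank}(A - C) \le r \,\}
\]
% Valiant: sufficiently rigid matrices have no linear-size, log-depth linear circuits
\[
  \mathcal{R}_A\!\left(\tfrac{n}{\log\log n}\right) \ge n^{1+\varepsilon}
  \;\Longrightarrow\;
  A \text{ is not computable by linear circuits of size } O(n) \text{ and depth } O(\log n)
\]
```

Here wt(C) denotes the number of nonzero entries of C.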

We saw, however, that the most rigid matrices known are not close to being rigid enough to imply circuit lower bounds. In fact, we showed that the main argument in the construction of many rigid matrices is based on the fact that if every r by r submatrix has full rank, then the number of changes needed to reduce the rank below r must be at least ~(n^2/r)log(n/r), which is the strongest bound we have on the rigidity of any explicit matrix. The proof we saw is due to Shokrollahi, Spielman and Stemann. The key lemma that goes into this comes from extremal graph theory and is known as the Zarankiewicz problem; we proved it following a proof from Bollobás’s book, Extremal Graph Theory (page 309).
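As a sanity check on the definition, rigidity can be computed by brute force for tiny matrices over GF(2) (an illustrative sketch of my own; the function names are not from the lecture, and real bounds are of course proved analytically, not by search):

```python
from itertools import combinations

def rank_gf2(rows, n):
    # Gaussian elimination over GF(2); each row is an n-bit integer
    rows, rank = list(rows), 0
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def rigidity(matrix, r):
    # smallest number of entry flips that brings the rank down to at most r
    n = len(matrix)
    if rank_gf2(matrix, n) <= r:
        return 0
    cells = [(i, j) for i in range(n) for j in range(n)]
    for k in range(1, n * n + 1):
        for flips in combinations(cells, k):
            rows = list(matrix)
            for i, j in flips:
                rows[i] ^= 1 << j
            if rank_gf2(rows, n) <= r:
                return k

I4 = [1 << i for i in range(4)]
assert rigidity(I4, 2) == 2   # identity: exactly n - r changes are needed
assert rigidity([0b1111] * 4, 1) == 0  # all-ones matrix already has rank 1
```

The identity matrix illustrates why rigidity lower bounds are hard: each change can decrease the rank by at most one, so n - r changes are both necessary and sufficient.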

Next lecture will be the last lecture in the course. We will give examples of concrete (modest, though state-of-the-art) rigid matrices, and present an approach by Zeev Dvir for the construction of better rigid matrices.

In this lecture we followed Chapter 13.8 of Stasys Jukna’s great book, and Chapter 2 of a survey by Lokam.

Lecture 10

Today we proved the structural result on ACC0 circuits by Beigel and Tarui (this line of work was initiated by Yao, and has enjoyed further results). See also the web addendum of Arora-Barak. We actually made a slight change in the proof and used small-bias sets instead of a pairwise independent family of hash functions, so as to reduce the fan-in of the Or gates (recall our construction of small-bias sets from Lecture 5). This gives a slight advantage in the parameters, as well as being somewhat simpler and more natural. One additional change we made was to use the construction of Gopalan, Guruswami and Lipton for the modulus-amplifying polynomials.
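The defining property of a modulus-amplifying polynomial is easy to state and check. The sketch below uses the classical Beigel-Tarui-style amplifier F_t(x) = 1 - (1-x)^t · Σ_{j&lt;t} C(t-1+j, j) x^j (not the Gopalan-Guruswami-Lipton construction we used in class): if x ≡ 0 (mod m) then F_t(x) ≡ 0 (mod m^t), and if x ≡ 1 (mod m) then F_t(x) ≡ 1 (mod m^t).

```python
from math import comb

def amplifier_coeffs(t):
    # coefficients (low degree first) of F_t(x) = 1 - (1-x)^t * sum_{j<t} C(t-1+j, j) x^j;
    # F_t is divisible by x^t, and F_t - 1 is divisible by (1-x)^t
    p = [comb(t, k) * (-1) ** k for k in range(t + 1)]   # (1-x)^t
    s = [comb(t - 1 + j, j) for j in range(t)]           # truncation of (1-x)^(-t)
    prod = [0] * (len(p) + len(s) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(s):
            prod[i + j] += a * b
    coeffs = [-c for c in prod]
    coeffs[0] += 1
    return coeffs

def F(t, x):
    return sum(c * x**k for k, c in enumerate(amplifier_coeffs(t)))

m, t = 6, 3
for x in range(-50, 50):
    if x % m == 0:
        assert F(t, x) % m**t == 0   # 0 mod m is amplified to 0 mod m^t
    if x % m == 1:
        assert F(t, x) % m**t == 1   # 1 mod m is amplified to 1 mod m^t
```

This F_t has degree 2t - 1; the point of better constructions is to control the degree and coefficient size, which govern the final circuit parameters.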

We also showed how this structural result implies the better-than-naive algorithm for ACC0 CKT-SAT, which was the missing piece in our proof of Williams’ result. So, finally, we finished this part of the course!

The course will have two more lectures, both devoted to Matrix Rigidity. In the next lecture we will learn about Matrix Rigidity and its relation to linear-circuit lower bounds. We will also see some classic results. In the last lecture we will cover a more recent and beautiful result by Zeev Dvir (who was a PhD student at Weizmann).

Those of you who don’t have a lecture to scribe, please mail me the end-of-the-semester talk that you would prefer to scribe.

No lecture tomorrow

Due to the harsh weather conditions, tomorrow’s lecture is canceled.

Lecture 9

Today we finally proved Williams’ result that unconditionally separates NEXP from ACC0. As emphasized, the idea of the proof is more general, and gives a way to transform fast CKT-SAT algorithms for a class C into a lower bound against C.

The proof today was modulo an important part that we will do next time, namely, the structural result (initiated by Yao and improved by Beigel-Tarui) for ACC^0 circuits, which in turn yields fast CKT-SAT algorithms for ACC^0 circuits.

Next time we will show this result and by that, in fact, prove Williams’ result up to (1) constructions of PRGs under hardness assumptions, (2) Toda’s Theorem, (3) the completeness of the Permanent for Sharp-P, and (4) the completeness of Succinct-SAT for NTIME(2^n/n^c). It is very likely that the first three results will appear in the end-of-the-semester talks, so we can safely say we proved Williams’ result assuming no background in complexity other than the very very basics (and (4), but come on..).

Lectures 7+8

In the past two lectures we proved a beautiful theorem by Impagliazzo and Kabanets that states that a derandomization of ZEROP (aka PIT) would yield some kind of circuit lower bound (either NEXP not in P/poly or the Permanent not in VP). The proof uses many results, old and new, from complexity, such as Toda’s Theorem, the completeness of the Permanent for Sharp-P, the fact that MA is contained in PH, and the non-deterministic time-hierarchy theorem.

Another central theorem used in the proof is a beautiful theorem by Impagliazzo, Kabanets and Wigderson: if NEXP is contained in P/poly then NEXP=MA. We have already seen that the randomized version of NEXP, namely MA_EXP, is not contained in P/poly (essentially due to IP=PSPACE), so NEXP is a natural next goal for P/poly lower bounds. Moreover, note that if MA were replaced with its non-randomized version, NP, then this would have contradicted the non-deterministic time-hierarchy theorem. That said, we do not even know how to prove that BPP is strictly contained in NEXP (which is an easier problem, as BPP is contained in P/poly). The proof is based on the ingenious “easy witness” method introduced by Kabanets.

We also commented that the proof of the above theorem also yields a proof of the fact that Succinct-SAT has succinct witnesses under the assumption that NEXP is contained in P/poly. We will use this for proving Williams’ result, starting next time. You are invited to read Ryan’s informal description of his result, or even watch Ryan’s online talk.

 

Lecture 6

Today we talked about Hardness-Randomness tradeoffs. We stated, without proof, that if E requires circuits of size S(n), then one can construct a pseudorandom generator with stretch S(n)^c for some constant c < 1. We talked informally about the proof strategy and mentioned that the theorem is usually proven in two steps. In the first step, one constructs the PRG under a stronger assumption, namely, that E cannot be approximated by small circuits (see Theorem 20.6 in the Arora-Barak book). Then one shows that the assumption can be weakened to the desired one, that is, a computational assumption rather than an approximation one.

A key ingredient in the second step is locally decodable codes, or locally list-decodable codes (see Chapter 19 in the Arora-Barak book). A somewhat more recent construction that doesn’t go through these two steps was given by Umans in a beautiful paper. This paper is one of the papers for the end-of-the-semester talks.

As mentioned, in this course we focus on the “other direction”, namely, showing that derandomization implies circuit lower bounds. We stated a beautiful result by Impagliazzo and Wigderson (see also Theorem 20.16 in the Arora-Barak book) which shows that one can get some sort of derandomization under uniform assumptions only (BPP not equal to EXP). Nevertheless, the theorem we started to prove, by Impagliazzo and Kabanets (see Theorem 20.17 in Arora-Barak and references therein), shows that derandomizing ZEROP would yield some circuit lower bound – either NEXP is not in P/poly or the Permanent requires super-polynomial arithmetic circuits.

To prove this theorem we took a short detour into interactive proofs and introduced the classes MA, IP and MIP (the latter just for fun, as it equals NEXP – a star complexity class in this course). Using the famous result IP=PSPACE (Theorem 8.19 in the Arora-Barak book and references therein), we deduced that if PSPACE is contained in P/poly then PSPACE = MA. We used this to improve upon Meyer’s theorem and show that if EXP is in P/poly then EXP collapses to MA (and not just to the second level of PH). Finally, we used this last statement to show that MA_EXP (~NEXP + randomness) is not contained in P/poly – to my knowledge, this is the best known lower bound against P/poly for a uniform class.
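The chain of implications from today’s lecture, in symbols (stated informally; see Arora-Barak for the precise formulations):

```latex
\[
  \mathsf{PSPACE} \subseteq \mathsf{P/poly} \;\Rightarrow\; \mathsf{PSPACE} = \mathsf{MA}
  \qquad \text{(via } \mathsf{IP} = \mathsf{PSPACE} \text{)}
\]
\[
  \mathsf{EXP} \subseteq \mathsf{P/poly} \;\Rightarrow\; \mathsf{EXP} = \mathsf{MA},
  \qquad \text{and hence} \qquad
  \mathsf{MA_{EXP}} \not\subseteq \mathsf{P/poly}.
\]
```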

We then switched to talk about succinct problems (such as Succinct-SAT) and, more generally, about strings that, viewed as truth tables, can be computed by a small circuit. For a much better informal description than the one I gave, see Williams’ casual tour around a circuit complexity bound (which will be useful also for the rest of the course). We started to prove a theorem by Impagliazzo, Kabanets and Wigderson (Lemma 2.20 in Arora-Barak) that shows that if NEXP is contained in P/poly then NEXP=EXP (and thus =MA). The proof uses the easy witness method introduced by Kabanets (see his paper, which also appears in the end-of-the-semester talks). Next time we will continue the proof.

Lecture 5

For the past four lectures we studied the basics of circuits and why they are interesting. Today we finally started to talk about the first topic of the course, namely, Hardness-Randomness tradeoffs.

We began by defining probabilistic Turing machines and complexity classes that capture randomness. We further noticed that P is contained in BPP, which in turn is contained in PSPACE. In fact, a cool theorem by Sipser (which was later improved by Gács) shows that BPP is contained in the second level of PH (see Chapter 7 of Arora-Barak, which we followed for this part of the lecture). There is also an alternative proof of this (and more) by Goldreich and Zuckerman.

We then switched to talk about pseudorandom sets, or equivalently, pseudorandom generators (PRGs) (Chapter 20 of Arora-Barak). We saw how a PRG against circuits would allow us to derandomize efficient algorithms. This is a neat application of circuits – to derandomize uniform algorithms, we need to fool non-uniform circuits. Next time we will see that circuits play a central role in the construction of PRGs as well. We will see that a hardness generator (a generator that outputs the truth table of a hard function) yields a pseudorandom generator. Thus, improving circuit lower bounds is a route to derandomization!

But before going into that, we saw one example of a pseudorandom set, one that fools a very specific algorithm – the algorithm that computes a random parity of its input. This pseudorandom set, introduced by Naor and Naor in the early 90s, is a very important object in theoretical computer science and is called a small-bias set. We saw a construction of such a set, called the Powering Construction, mainly so as to give an example of how to construct (and, even more importantly, analyze) things that look random (but also because we will use small-bias sets later on). A very good talk about small-bias sets, available online, was given by Amir Yehudayoff.
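A minimal sketch of the Powering Construction (following Alon-Goldreich-Håstad-Peralta; the concrete field and parameters below are my own small choices for illustration): the seed is a pair (x, y) of elements of GF(2^k), and the i-th output bit is the inner product ⟨x^i, y⟩ of the bit representations. For a nonzero parity S, the bias equals Pr_x[Σ_{i∈S} x^i = 0], which is at most n/2^k since a nonzero degree-n polynomial has at most n roots; for tiny parameters we can verify this exhaustively:

```python
K, N = 6, 8                # field GF(2^6), output length 8; bias bound N/2^K = 1/8
IRRED = 0b1000011          # x^6 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    # multiplication in GF(2^K): carry-less product, reduced mod IRRED
    r = 0
    for _ in range(K):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << K):
            a ^= IRRED
    return r

def sample(x, y):
    # output string for seed (x, y): bit i is the GF(2) inner product <x^(i+1), y>
    s, p = 0, 1
    for i in range(N):
        p = gf_mul(p, x)
        s |= (bin(p & y).count("1") & 1) << i
    return s

strings = [sample(x, y) for x in range(1 << K) for y in range(1 << K)]
max_bias = max(
    abs(sum(1 - 2 * (bin(s & S).count("1") & 1) for s in strings)) / len(strings)
    for S in range(1, 1 << N)    # every nonzero parity S
)
assert max_bias <= N / 2**K
```

The point of the analysis is visible in the code: a parity over S of the output bits collapses, by bilinearity, to ⟨p_S(x), y⟩ for the polynomial p_S(x) = Σ_{i∈S} x^i, and y washes out the bias except at the (few) roots of p_S.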

Next lecture we will show how to construct a PRG given a hardness generator. However, in this course we are more interested in the other direction of the tradeoff, so we will only sketch the proof and start to cover a bit more of the theory we need to prove the other direction, namely, that derandomization implies circuit lower bounds!

A great online talk about Hardness-Randomness tradeoffs was given a few years ago by Kabanets. Another, more introductory, high-level video lecture about pseudorandom generators was given by Impagliazzo a few years ago: part 1 and part 2. For a more formal treatment, see an extensive survey about derandomization of complexity classes by Miltersen.

Lecture Notes

The lecture notes are available here. The file will be updated over time. Currently it contains the notes for the first four lectures.