Please mail me a preferred time for your talk. We’ll reserve 90 minutes for each talk; more slots will be opened in the future. Also, please reserve a room for your talk by mailing Gizel Maimon (the address is, as usual, first name dot last name at weizmann.ac.il).
Today was the last lecture of the course! What would your weekends look like without our Thursday circuit lower bounds lectures? Who knows.. Well, don’t forget we still have the end-of-semester talks ahead of us, and I suggest you attend at least two or three of the ~20 talks. This is a good and somewhat informal chance to hear about classic and important results.
In today’s lecture we continued to talk about Matrix Rigidity. We showed how to construct modest, though state-of-the-art, rigid matrices from Algebraic-Geometric codes, using the untouched submatrix method. We then defined locally decodable codes (LDC) and locally correctable codes (LCC) — see a recent survey by Sergey Yekhanin — and showed a result by Zeev Dvir that roughly states the following: if the generating matrix of a known construction of an LDC is not rigid, then there exists an LCC with large dimension (and somewhat nonstandard query and distance parameters). Therefore, an upper bound on the dimension of LCCs with such parameters would imply that the generating matrices of known LDCs are rigid enough to yield linear-circuit lower bounds.
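For completeness, here are the standard definitions of these codes (a textbook formulation along the lines of Yekhanin’s survey, not verbatim from class):

```latex
% A code $C : \mathbb{F}^k \to \mathbb{F}^n$ is a $(q,\delta,\varepsilon)$-LDC
% if there is a randomized decoder that, given $i \in [k]$ and oracle access
% to any word $w$ with $\mathrm{dist}(w, C(x)) \le \delta n$, makes at most
% $q$ queries to $w$ and satisfies
\[
  \Pr\!\left[\,\mathrm{Dec}^{\,w}(i) = x_i\,\right] \;\ge\; \frac{1}{2} + \varepsilon .
\]
% An LCC is defined analogously, except that the decoder must recover any
% coordinate $C(x)_j$ of the codeword itself, rather than of the message $x$.
```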
That’s it. I want to thank you all for coming to the lectures and contributing your thoughts and questions to the discussion. I had a great time!
Today we started to talk about Matrix Rigidity. We first described the motivating problem from algebraic circuit complexity, namely, the problem of explicitly constructing a matrix that, when viewed as a linear transformation, cannot be computed by a linear circuit of linear size and logarithmic depth simultaneously.
We then discussed matrix rigidity, showed a couple of examples and basic facts, and proved Valiant’s theorem, which roughly states that if a matrix is rigid enough, then it cannot be computed by a circuit as above. In other words, Valiant reduced the computational problem to an algebraic/combinatorial one (which is always great!).
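To fix notation (a standard formulation, not verbatim from the lecture), the rigidity function and Valiant’s theorem can be stated as follows:

```latex
% The rigidity of $M$ at rank $r$: the minimum number of entries of $M$
% one must change to bring its rank down to at most $r$.
\[
  R_M(r) \;=\; \min \left\{\, \mathrm{wt}(S) \;:\; \operatorname{rank}(M + S) \le r \,\right\}
\]
% Valiant (1977): if for some constants $\epsilon, \delta > 0$ an
% $n \times n$ matrix $M$ satisfies
\[
  R_M(\epsilon n) \;\ge\; n^{1+\delta},
\]
% then the linear map $x \mapsto Mx$ cannot be computed by a linear circuit
% of size $O(n)$ and depth $O(\log n)$ simultaneously.
```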
We saw, however, that the most rigid explicit matrices known are not close to being rigid enough to imply such circuit lower bounds. In fact, we showed that the main argument in the construction of many rigid matrices is based on the fact that if every square submatrix has full rank, then the number of changes needed to reduce the rank below r must be at least ~(n^2/r)log(n/r), which is the strongest bound we have on the rigidity of any explicit matrix. The proof we saw is due to Shokrollahi, Spielman and Stemann. The key lemma it relies on comes from extremal graph theory and is known as the Zarankiewicz problem, which we proved (following a proof from Bollobás’s book Extremal Graph Theory, page 309).
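To make the definition concrete, here is a small sketch (my own illustration, not code from the lecture): a brute-force computation of rigidity over GF(2) for tiny matrices. The search is exponential in n^2, so this is only feasible for very small n, but it shows the quantity being bounded.

```python
from itertools import combinations

def rank_gf2(rows, n):
    """Rank over GF(2) by Gaussian elimination; rows are n-bit masks."""
    rows = list(rows)
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]
        r += 1
    return r

def rigidity_gf2(M, n, r):
    """R_M(r): minimum number of entry flips bringing rank(M) down to <= r."""
    if rank_gf2(M, n) <= r:
        return 0
    cells = [(i, j) for i in range(n) for j in range(n)]
    for k in range(1, n * n + 1):
        for flips in combinations(cells, k):
            rows = list(M)
            for i, j in flips:
                rows[i] ^= 1 << j  # flip entry (i, j)
            if rank_gf2(rows, n) <= r:
                return k
    return n * n

# The 4x4 identity: each flipped entry lowers the rank by at most 1,
# so R_I(r) = n - r; check this for r = 2.
I4 = [1 << i for i in range(4)]
assert rigidity_gf2(I4, 4, 2) == 2
```

Of course, the whole difficulty of the area is that no explicit matrix is known whose rigidity, computed this way in principle, exceeds the ~(n^2/r)log(n/r) barrier above.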
Next lecture will be the last lecture of the course. We will give examples of concrete (modest, though state-of-the-art) rigid matrices, and present an approach by Zeev Dvir for constructing more rigid matrices.
Today we proved the structural result on ACC0 circuits by Beigel and Tarui (this line of work was initiated by Yao, and has enjoyed further results since). See also the web addendum of Arora–Barak. We actually made a slight change in the proof and used small-bias sets instead of a pairwise-independent family of hash functions, so as to reduce the fan-in of the OR gates (recall our construction of small-bias sets from Lecture 5). This gives a slight advantage in the parameters, as well as being somewhat simpler and more natural. One additional change we made was to use the construction of Gopalan, Guruswami and Lipton for the modulus-amplifying polynomials.
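As a quick sanity check on what modulus amplification means, here is a sketch using the classic Beigel–Tarui polynomial (not the construction we used in class): a degree-(2t-1) polynomial F_t with F_t(x) ≡ 0 (mod m^t) whenever x ≡ 0 (mod m), and F_t(x) ≡ 1 (mod m^t) whenever x ≡ 1 (mod m).

```python
from math import comb

def amplify(t, x):
    """Beigel-Tarui modulus-amplifying polynomial of degree 2t - 1:
        F_t(x) = 1 - (1 - x)^t * sum_{j=0}^{t-1} C(t-1+j, j) x^j.
    A truncated-binomial identity shows F_t(x) = x^t * g(x) for an integer
    polynomial g, which gives both congruences: x ≡ 0 (mod m) forces
    F_t(x) ≡ 0 (mod m^t), while x ≡ 1 (mod m) makes (1-x)^t ≡ 0 (mod m^t),
    so F_t(x) ≡ 1 (mod m^t)."""
    return 1 - (1 - x) ** t * sum(comb(t - 1 + j, j) * x ** j for j in range(t))

# Amplify a "mod 3" bit to a "mod 3^4" bit.
m, t = 3, 4
for x in range(100):
    if x % m == 0:
        assert amplify(t, x) % m ** t == 0
    elif x % m == 1:
        assert amplify(t, x) % m ** t == 1
```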
We also showed how this structural result implies the better-than-naive algorithm for ACC0 CKT-SAT, which was the missing piece in our proof of Williams’ result. So, finally, we finished this part of the course!
The course will have two more lectures, both devoted to Matrix Rigidity. In the next lecture we will learn about Matrix Rigidity and its relation to linear-circuit lower bounds. We will also see some classic results. In the last lecture we will cover a more recent and beautiful result by Zeev Dvir (who was a PhD student at Weizmann).
Those of you who don’t have a lecture to scribe, please mail me the end-of-semester talk that you would prefer to scribe.
Due to the harsh weather conditions, tomorrow’s lecture is canceled.
Today we finally proved Williams’ result that unconditionally separates NEXP from ACC0. As emphasized, the idea of the proof is more general: it gives a way to transform fast CKT-SAT algorithms for a class C into lower bounds against C.
The proof today was modulo an important part we will do next time, namely the structural result (initiated by Yao and improved by Beigel and Tarui) for ACC0 circuits, which in turn yields fast CKT-SAT algorithms for ACC0 circuits.
Next time we will show this result and thereby, in fact, prove Williams’ result up to (1) constructions of PRGs under hardness assumptions, (2) Toda’s Theorem, (3) the completeness of the Permanent for Sharp-P, and (4) the completeness of Succinct-SAT for NTIME(2^n/n^c). It is very likely that the first three results will appear in the end-of-semester talks, so we can safely say we proved Williams’ result assuming no background in complexity other than the very very basics (and (4), but come on..).
In the past two lectures we proved a beautiful theorem by Impagliazzo and Kabanets that states that a derandomization of ZEROP (aka PIT) would yield some kind of circuit lower bound (either NEXP not in P/poly or the Permanent not in VP). The proof uses many results, old and new, from complexity, such as Toda’s Theorem, the completeness of the Permanent for Sharp-P, the containment of MA in PH, and the non-deterministic time-hierarchy theorem.
Another central theorem used in the proof is a beautiful theorem by Impagliazzo, Kabanets and Wigderson: if NEXP is contained in P/poly then NEXP=MA. We have already seen that the randomized version of NEXP, namely MA_EXP, is not contained in P/poly (essentially due to IP=PSPACE), so NEXP is a natural next goal for P/poly lower bounds. Moreover, note that if MA were replaced with its non-randomized version, NP, then this would have contradicted the non-deterministic time-hierarchy theorem. That said, we do not even know how to prove that BPP is strictly contained in NEXP (which is an easier problem, as BPP is contained in P/poly). The proof is based on the ingenious “easy witness” method introduced by Kabanets.
We also commented that the proof of the above theorem also yields a proof of the fact that Succinct-SAT has succinct witnesses under the assumption that NEXP is contained in P/poly. We will use this in proving Williams’ result, starting next time. You are invited to read Ryan’s informal description of his result, or even watch his online talk.