Complexity theory is the mathematical study of algorithmic problems with regard to how cost-effective their solutions are. Most prominent are the *decision problems*: yes-no questions whose answer depends on some further input; for example, “Is \( n\) a prime number?” is a decision problem depending on the input \( n\). I assume the reader of this post has some familiarity with the basic concepts of this theory, in particular with what the PSPACE complexity class is and what it means for a problem to be PSPACE-complete. A well-known theorem due to Shamir states that the class PSPACE and another class, IP (which I explain below, since I wouldn’t count it as a “basic” concept), are equal. The goal of this post is to prove this theorem. It is not based on a single source, but there is a very detailed exposition due to B.J. Mares, which I shall leave as a reference.

## Interactive proofs

The idea behind a language \( L\) lying in the IP complexity class is that we have two parties in a conversation: one of them, the *verifier*, has limited resources, while the other, the *prover*, is all-knowing and not constrained by resources in any way. In their dialogue, the prover tries to convince the verifier that some string \( w\) lies in \( L\). Afterwards, the verifier declares whether they accept or reject the computation, meaning that they did or didn’t get convinced that \( w\in L\).

The first idea for defining a complexity class out of this is as the class of languages for which some prover is able to convince the verifier iff the string truly is in the language, where we require the verifier to be a polynomial-time machine. However, if the verifier is deterministic, this gives us precisely the class NP: since the prover knows precisely what the verifier is going to do with their responses, they could just as well have given all their responses beforehand, from which we infer this class is contained in NP.

Instead, we let the verifier be *probabilistic* – in the course of the proof, we allow them to receive some perfectly random bits, and the computation may proceed differently depending on what they are. Also, to account for this probabilistic nature, we will allow the verifier to make errors, but only with small probability. IP is defined as the class of languages \( L\) such that, for some (polynomial-time) verifier and any string \( w\):

- if \( w\in L\), then some prover makes the verifier accept with probability at least \( 2/3\), and
- if \( w\not\in L\), then there is no prover which makes the verifier accept with probability higher than \( 1/3\).

The exact value of the constants \( 2/3,1/3\) is not important here, as long as they are, respectively, larger and smaller than \( 1/2\): repeating the protocol a number of times and accepting iff the original verifier would’ve accepted a majority of the time can put these probabilities at something of order \( 1-2^{-|w|},2^{-|w|}\) or better, which, in practice, would be negligible.
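The effect of majority-vote amplification can be computed exactly from the binomial distribution; here is a quick sketch (`majority_success` is my own helper name):

```python
from math import comb

def majority_success(p: float, k: int) -> float:
    """Probability that a strict majority of k independent runs,
    each succeeding with probability p, succeed (k odd avoids ties)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# With per-run success 2/3, repetition drives the error down fast:
for k in (1, 11, 101):
    print(k, majority_success(2/3, k))
```

Already at a hundred repetitions the failure probability is far below a percent, and it keeps decaying exponentially in the number of repetitions.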

A famous example of a problem which is not known to be in NP, but is known to have an interactive proof protocol, is *graph nonisomorphism*: given two graphs \( G_1,G_2\), decide whether they are *not* isomorphic (note that the graph *isomorphism* problem is rather trivially in NP). One way for the verifier to get convinced they are not isomorphic is as follows: the verifier randomly (and secretly – the prover won’t know the result!) chooses one of the graphs \( G_i\), randomly permutes its vertices to get an isomorphic graph \( H\), and then presents \( H\) to the prover to see whether they can figure out what \( i\) is. If \( G_1\) and \( G_2\) are not isomorphic, then \( H\) is isomorphic to only one of them, so the prover can easily tell which one it’s isomorphic to. On the other hand, if \( G_1\) *is* isomorphic to \( G_2\), the best the prover can do is guess which \( i\) the verifier chose, which will be right with only 50% probability, so repetition will make it really unlikely for the prover to be right every time.
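A toy simulation of one round of this protocol (the graph encoding and function names are mine; the brute-force `isomorphic` stands in for the computationally unbounded prover):

```python
import random
from itertools import permutations

# A graph on vertices 0..n-1 is a frozenset of frozenset edges {u, v}.
def relabel(edges, pi):
    """Apply the vertex permutation pi to every edge."""
    return frozenset(frozenset({pi[u], pi[v]}) for u, v in map(tuple, edges))

def isomorphic(e1, e2, n):
    # The all-powerful prover can afford brute force (fine for tiny n).
    return any(relabel(e1, pi) == e2 for pi in permutations(range(n)))

def one_round(G1, G2, n):
    """Verifier secretly picks i, sends a shuffled copy of G_i;
    the prover answers with the index it believes was chosen."""
    i = random.randrange(2)
    pi = list(range(n)); random.shuffle(pi)
    H = relabel([G1, G2][i], pi)
    guess = 0 if isomorphic(G1, H, n) else 1   # prover's best strategy
    return guess == i

# A non-isomorphic pair on 4 vertices: a path vs. a triangle.
path = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})})
tri  = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
print(all(one_round(path, tri, 4) for _ in range(20)))
```

With a non-isomorphic pair the prover wins every round; with an isomorphic pair `isomorphic(G1, H, n)` would hold regardless of \( i\), so the prover would be reduced to a coin flip.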

The result which we are going to prove, that IP is the same as PSPACE, is surprising because it shows that in the context of interactive proofs, adding randomness significantly increases the capabilities of a system (since, at least conjecturally, PSPACE is much larger than NP). This is in contrast to the more standard complexity classes: the probabilistic analogue of P, called BPP, is conjectured to be equal to P.

We shall now begin the proof of Shamir’s theorem.

## PSPACE-completeness of TQBF

One of the most important decision problems in PSPACE is the following problem:

**TQBF:** Given a quantified Boolean formula

\( \displaystyle\forall x_1\exists x_2\dots\mathsf{Q} x_n:\varphi(x_1,\dots,x_n)\)

where \( \mathsf Q\) is \( \exists\) or \( \forall\) and \( \varphi\) is an unquantified Boolean formula, decide whether it’s true.

This is the most well-known example of a PSPACE-complete problem. Verifying that it is contained in PSPACE is routine, and amounts to checking that the obvious “brute-force” algorithm works in space polynomial in the size of the input. Its PSPACE-hardness proceeds by a standard argument, which we include for completeness (pun not intended).
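As a sketch, the brute-force evaluation is a short recursion whose only memory cost is the recursion stack holding the current partial assignment – one level per variable, so polynomial space (the encoding conventions below are mine):

```python
def tqbf(quantifiers, phi, assignment=()):
    """Evaluate a fully quantified Boolean formula recursively.
    `quantifiers` is a string over 'A' (forall) / 'E' (exists); `phi` maps a
    tuple of n Booleans to a Boolean.  Recursion depth is n, so the space
    used is polynomial even though the running time is exponential."""
    if not quantifiers:
        return phi(assignment)
    branches = (tqbf(quantifiers[1:], phi, assignment + (b,))
                for b in (False, True))
    return all(branches) if quantifiers[0] == 'A' else any(branches)

# forall x1 exists x2 : (x1 or x2) and (not x1 or not x2), i.e. x1 != x2
f = lambda a: (a[0] or a[1]) and (not a[0] or not a[1])
print(tqbf('AE', f))   # True: take x2 = not x1
```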

Suppose M is a Turing machine which works in polynomial space, say in space bounded by \( p(n)\), where throughout \( n\) is the length of the input. Every configuration of the machine can be described using a sequence of bits of polynomial length, for example by using \( p(n)\) bits to store the contents of the tape, another \( p(n)\) to indicate which cell the machine is currently reading, and a constant number of bits to store which state the machine is in. Until the end of this section, capital letters apart from M will denote configurations of M, and also strings of bits encoding them. This encoding shows that M has at most \( 2^{g(n)}\) configurations for some polynomial \( g(n)\).

We may redesign M slightly so that, once it accepts, it moves to a configuration we know in advance, e.g. it clears everything on the tape and enters a special state while sitting on the leftmost cell. From this and the above we see there are fixed configurations \( X,Y\) such that M accepts iff M gets from configuration \( X\) to \( Y\) in at most \( 2^{g(n)}\) steps. We shall recursively construct a quantified Boolean formula \( \varphi_i(A,B)\) which is equivalent to “M gets from configuration \( A\) to configuration \( B\) in at most \( 2^i\) steps”.

For \( i=0\), the formula states that \( A=B\) or \( B\) is one computation step ahead of \( A\). It is more tedious than enlightening to show that such a formula can be constructed in polynomial time, so we will skip that. For the recursion step, we note that we can reach configuration \( B\) from \( A\) in at most \( 2^{i+1}\) steps iff we can find a midpoint configuration \( C\) such that we can get from \( A\) to \( C\) in at most \( 2^i\) steps, and likewise from \( C\) to \( B\). Hence a natural idea is to define

\( \displaystyle\varphi_{i+1}(A,B)=\exists C:\varphi_i(A,C)\land\varphi_i(C,B)\)

(recall \( C\) is treated as a sequence of \( g(n)\) bits, so \( \exists C\) is actually a sequence of \( g(n)\) quantifiers). However, with this idea the length of formulas grows exponentially fast with \( i\), which is bad for us. Instead, we use the following construction:

\( \displaystyle\varphi_{i+1}(A,B)=\exists C\forall P,Q:((P=A\land Q=C)\lor(P=C\land Q=B))\Rightarrow\varphi_i(P,Q)\).

It is straightforward to see that this construction can be carried out in polynomial time. Transforming the formula into PNF (prenex normal form, i.e. all quantifiers come before the formula) is routine. This establishes PSPACE-completeness.
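The point of the \( \forall P,Q\) trick is that it shares one copy of \( \varphi_i\) between the two recursive calls. A toy computation makes the difference in growth concrete (the unit sizes below are illustrative, not derived from an actual encoding):

```python
# Hypothetical sizes: g units per configuration's worth of quantifiers,
# c units for the base formula and for the fixed "(P=A and Q=C) or ..."
# scaffolding.  These numbers are made up purely for illustration.
g, c, steps = 100, 500, 64        # reachability within 2^64 steps, say

naive = shared = c
for _ in range(steps):
    naive  = 2 * naive + g        # phi_{i+1} contains two copies of phi_i
    shared = shared + 2 * g + c   # one copy of phi_i plus O(g) scaffolding

print(f"naive: ~2^{naive.bit_length()} units, shared: {shared} units")
```

The naive recursion is astronomically large after 64 doublings, while the shared version grows only linearly in the number of recursion levels.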

## Arithmetization

This is the central idea of the proof. We transform the quantified Boolean formula into a polynomial function which, on Boolean inputs (\( 0,1\)), gives us its Boolean truth value. For the innermost, unquantified formula \( \varphi(x_1,\dots,x_n)\), we first repeatedly apply de Morgan’s laws and other reduction rules until we are left with a formula involving only \( \neg,\land\). Then we note that if formulas \( \varphi_1,\varphi_2\) give polynomials \( f,g\), then \( \varphi_1\land\varphi_2,\neg\varphi_1\) can be represented by \( fg,1-f\) respectively. Applying quantifiers is not difficult either: if \( f\) corresponds to \( \varphi(x_1,\dots,x_n)\), then for \( \forall x_n:\varphi(x_1,\dots,x_n)\) we take \( f(x_1,\dots,x_{n-1},0)f(x_1,\dots,x_{n-1},1)\), and for \( \exists x_n:\varphi(x_1,\dots,x_n)\), using de Morgan’s laws, \( 1-(1-f(x_1,\dots,x_{n-1},0))(1-f(x_1,\dots,x_{n-1},1))\). This process is known as *arithmetization* of the Boolean formula. After arithmetizing a formula with every variable quantified, we get a constant polynomial, and the formula is true iff this constant value is 1.
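To illustrate, here is a minimal sketch of arithmetization in Python, using higher-order functions in place of symbolic polynomials (all names are mine):

```python
# Arithmetization rules on {0,1}: AND -> f*g, NOT -> 1-f, with quantifiers
# turned into products over the bound variable, exactly as in the text.
AND = lambda f, g: lambda *x: f(*x) * g(*x)
NOT = lambda f: lambda *x: 1 - f(*x)
OR  = lambda f, g: NOT(AND(NOT(f), NOT(g)))            # de Morgan
FORALL = lambda f: lambda *x: f(*x, 0) * f(*x, 1)
EXISTS = lambda f: lambda *x: 1 - (1 - f(*x, 0)) * (1 - f(*x, 1))

x1 = lambda a, b: a
x2 = lambda a, b: b
# forall x1 exists x2 : x1 != x2, written as (x1 or x2) and (not x1 or not x2)
phi = AND(OR(x1, x2), OR(NOT(x1), NOT(x2)))
value = FORALL(EXISTS(phi))
print(value())   # 1, since the quantified formula is true
```

On Boolean inputs `phi` agrees with the truth value of the unquantified formula, and the fully quantified value is the promised constant.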

Unfortunately, because of the quantifiers, this construction leaves us with a polynomial of degree exponential in the length of the formula. To fix this, we can *linearize* the polynomial. The idea is simple: the polynomials \( f(x_1,\dots,x_i,\dots,x_n)\) and \( (1-x_i)f(x_1,\dots,0,\dots,x_n)+x_if(x_1,\dots,1,\dots,x_n)\) take the same values on Boolean inputs, while the latter polynomial has degree in every variable at most that of the former, and additionally its degree in \( x_i\) is at most \( 1\). Thus linearizing in every variable will bring all degrees down to \( 1\). We will denote the operator linearizing in variable \( x_i\) by \( L_i\). Denoting also by \( \forall_i,\exists_i\) the operators on polynomials corresponding to applying quantifiers, solving TQBF essentially amounts to finding the value of

\( \forall_1L_1\exists_2L_1L_2\dots \mathsf{Q}_nL_1\dots L_n f(x_1,\dots,x_n)\),

where \( f(x_1,\dots,x_n)\) is a polynomial corresponding to the unquantified part of the formula. Importantly, as can be seen from the construction, it is very easy to find its values on integer inputs.
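The linearization operator \( L_i\) can be sketched concretely on polynomials represented as exponent-tuple-to-coefficient dictionaries (a toy representation of my own):

```python
def evaluate(poly, point):
    """Evaluate a {exponent-tuple: coefficient} polynomial at a point."""
    total = 0
    for exps, c in poly.items():
        term = c
        for e, v in zip(exps, point):
            term *= v ** e
        total += term
    return total

def linearize(poly, i):
    """L_i: on monomials, (1-x_i)*poly|_{x_i=0} + x_i*poly|_{x_i=1} simply
    replaces every positive power of x_i by x_i^1 (and merges terms)."""
    out = {}
    for exps, c in poly.items():
        key = exps[:i] + (min(exps[i], 1),) + exps[i + 1:]
        out[key] = out.get(key, 0) + c
    return out

f = {(3, 1): 1, (2, 0): 1, (0, 1): 1}   # x1^3*x2 + x1^2 + x2
g = linearize(f, 0)                     # x1*x2 + x1 + x2
print(all(evaluate(f, (a, b)) == evaluate(g, (a, b))
          for a in (0, 1) for b in (0, 1)))   # equal on Boolean inputs
print(max(e[0] for e in g))                   # degree in x1 is now 1
```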

We define the following sequence of polynomials:

\( \displaystyle f_0()=\forall_1L_1\exists_2L_1L_2\dots \mathsf{Q}_nL_1\dots L_n f(x_1,\dots,x_n)\\
f_1(x_1)=L_1\exists_2L_1L_2\dots \mathsf{Q}_nL_1\dots L_n f(x_1,\dots,x_n)\\
f_2(x_1)=\exists_2L_1L_2\dots \mathsf{Q}_nL_1\dots L_n f(x_1,\dots,x_n)\\
f_3(x_1,x_2)=L_1L_2\dots \mathsf{Q}_nL_1\dots L_n f(x_1,\dots,x_n)\\
\dots\\
f_m(x_1,\dots,x_n)=f(x_1,\dots,x_n)\)

(the empty brackets after \( f_0\) emphasize that it’s a function of zero variables). We want the prover to convince the verifier that \( f_0()=1\), and we can easily compute values of \( f_m\). Also, note that there is an easy polynomial upper bound on the total degrees of all these polynomials, namely the degree \( d\) of \( f_m\) (note that most of them will have degree \( 1\) or \( 2\) in each variable).

## Interactive proof protocol for TQBF

We are now ready to describe the procedure the verifier and prover will execute during the interactive proof. First of all, *all the computations will take place modulo a prime* \( p\). This will keep the size of the numbers involved in the computation under control. \( p\) will have length polynomial in the length of the input, and we will also require it to be sufficiently large (such primes necessarily exist, e.g. by Bertrand’s postulate, but more elementary arguments can be given as well). The prover shall start by sending a prime \( p\), and either they will provide a primality certificate which the verifier can quickly check, or the verifier will have to check primality of \( p\) using a primality test.
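For instance, a toy verifier could find (or check) such a prime by brute force; trial division here is purely illustrative, and a real verifier would use a proper polynomial-time primality test:

```python
def is_prime(n):
    """Naive trial division - fine for toy sizes only."""
    if n < 2:
        return False
    k = 2
    while k * k <= n:
        if n % k == 0:
            return False
        k += 1
    return True

l = 20
# Bertrand's postulate guarantees a prime between 2^l and 2^(l+1).
p = next(n for n in range(2**l, 2**(l + 1)) if is_prime(n))
print(p)
```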

Recall that the verifier wants to challenge the prover’s claim that \( f_0()=1\). We will indicate this sort of claim by \( f_0^P()=1\) (the superscript indicates that this relation is what the prover *claims* is true). The remainder of the protocol proceeds as follows:

- The prover sends a polynomial \( f_1^P(x)\), which they claim to be equal to \( f_1(x)\).
- The verifier checks that the prover’s two claims are *consistent*: in this case, since \( f_0()=\forall_1 f_1(x)=f_1(0)f_1(1)\), the verifier ought to check that \( f_1^P(0)f_1^P(1)=f_0^P()\). If this is the case, then they choose a random number \( r_1\) modulo \( p\) and send it to the prover. If the prover’s claim that \( f_1^P(x)=f_1(x)\) was true, then in particular we must have \( f_1^P(r_1)=f_1(r_1)\). This is a new claim which the verifier is challenging.
- The prover now sends a polynomial \( f_2^P(x)\), which they claim to be \( f_2(x)\).
- The verifier again checks consistency: we should have \( f_1^P(r_1)=(1-r_1)f_2^P(0)+r_1f_2^P(1)\). They choose a random \( r_2\), send it to the prover and challenge the equality \( f_2^P(r_2)=f_2(r_2)\).
- This time the prover sends \( f_3^P(r_2,x)\), claiming it’s \( f_3(r_2,x)\).
- They proceed in this manner a total of \( m\) times: depending on the outermost operator in the definition of \( f_i\), the prover sends a polynomial \( f_{i+1}^P\) with one free variable. The verifier makes a consistency check, chooses a random number, sets it as the free variable in the polynomial, and challenges the prover with the value they get.
- After all these turns, the verifier is left with the prover’s claim that \( f_m^P(q_1,\dots,q_n)=f_m(q_1,\dots,q_n)\). But at this point the verifier can check this claim by themselves: we have noted earlier that the values of \( f_m\) can be easily computed. This is one of the two final checks; the other one is to check that all polynomials sent by the prover have degree at most \( d\).
- If all of the consistency checks were successful and the final claim was verified to be true, the verifier accepts. If at any point a check failed or the last claim turned out to be false, they reject.
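To make the rounds concrete, here is a toy simulation with an honest prover for the formula \( \forall x_1\exists x_2:x_1\oplus x_2\), whose arithmetized core is \( x_1+x_2-2x_1x_2\). The representation choices (point-value polynomials, the brute-force oracle `tail`) are mine:

```python
import random

p = 2**13 - 1                    # toy modulus (a Mersenne prime)

def f(x1, x2):                   # arithmetized core: x1 XOR x2
    return (x1 + x2 - 2 * x1 * x2) % p

# Operator chain from the post, outermost first:
# forall_1 L_1 exists_2 L_1 L_2, applied to f (0-indexed variables here).
CHAIN = [('A', 0), ('L', 0), ('E', 1), ('L', 0), ('L', 1)]

def tail(i, env):
    """f_i evaluated at env - the honest prover's brute-force oracle."""
    if i == len(CHAIN):
        return f(*env)
    op, j = CHAIN[i]
    lo = tail(i + 1, env[:j] + [0] + env[j + 1:])
    hi = tail(i + 1, env[:j] + [1] + env[j + 1:])
    if op == 'A':
        return lo * hi % p
    if op == 'E':
        return (1 - (1 - lo) * (1 - hi)) % p
    return ((1 - env[j]) * lo + env[j] * hi) % p      # linearization

def lagrange_eval(points, x):
    """Evaluate the low-degree polynomial through `points` at x, mod p."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xk, _ in points:
            if xk != xi:
                num = num * (x - xk) % p
                den = den * (xi - xk) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def run_protocol(d=3):
    env, claim = [0, 0], tail(0, [0, 0])   # prover claims f_0() = 1
    for i, (op, j) in enumerate(CHAIN):
        # Prover sends f_{i+1} with x_j free, here as d+1 point-value pairs.
        g = [(t, tail(i + 1, env[:j] + [t] + env[j + 1:]))
             for t in range(d + 1)]
        g0, g1 = lagrange_eval(g, 0), lagrange_eval(g, 1)
        if op == 'A':
            ok = g0 * g1 % p == claim
        elif op == 'E':
            ok = (1 - (1 - g0) * (1 - g1)) % p == claim
        else:
            ok = ((1 - env[j]) * g0 + env[j] * g1) % p == claim
        if not ok:
            return False                   # consistency check failed
        env[j] = random.randrange(p)       # fresh random challenge
        claim = lagrange_eval(g, env[j])
    return claim == f(*env)                # final direct check of f_m

print(run_protocol())
```

An honest prover passes every check regardless of the random challenges, since each consistency condition is a polynomial identity; a dishonest prover would instead have to survive the \( \frac{d}{p}\)-probability events analyzed in the next section.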

It is clear that if the formula is true, then the prover can convince the verifier of its truth using this protocol 100% of the time: if they send the true values of \( f_i\) as \( f_i^P\), the verifier will not find any inconsistencies, so will accept. This is referred to as *completeness* of the protocol – that in true cases the verifier can be convinced with high probability (in general not necessarily 100%, but in this case we can do that well). We now must verify its *soundness* – that it is not possible for a prover to fool the verifier into believing a false formula is true.

The fundamental fact of use here is the following property of polynomials over fields (integers modulo a prime form a field, so it is applicable):

**Lagrange’s theorem:** If two polynomials \( f,g\) defined over a field have degree at most \( d\) and agree on more than \( d\) values, then they are equal.

We will now upper bound the probability of fooling the verifier. If a deceitful prover wants to have any chance at all, they need to send a \( p\) which really is a prime of the desired size, and all the polynomials along the way have to have degree at most \( d\).

For the verifier to accept the final check, we need the two polynomials \( f_m,f_m^P\), seen as polynomials in one variable, to agree on the input \( x=q_n\) (recall that the prover sends the polynomials with all but one variable fixed). By Lagrange’s theorem, unless the polynomials are equal, this can happen for at most \( d\) values of \( x\). Since \( q_n\) was chosen randomly, if the polynomials are not equal, then the equality holds with probability at most \( \frac{d}{p}\).
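This agreement bound is easy to check empirically; the particular polynomials and modulus below are arbitrary choices of mine:

```python
p, d = 101, 3

def ev(coeffs, x):
    """Evaluate a polynomial given by its coefficient list at x, mod p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

f = [5, 0, 2, 1]   # 5 + 2x^2 + x^3
g = [1, 7, 0, 1]   # 1 + 7x + x^3 -- a different polynomial of degree <= d
agreements = sum(ev(f, x) == ev(g, x) for x in range(p))
print(agreements)  # never more than d, so a random challenge catches the lie
```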

If the formula is not true, then the prover must have lied in the first step – it cannot be that \( f_0^P()=f_0()\). We can now proceed inductively to see that the prover will most likely have to keep making false claims: if they have falsely claimed that \( f_i^P=f_i\) (evaluated at the random values chosen so far), then to pass the consistency check they must provide a polynomial \( f_{i+1}^P(x)\) which isn’t equal to \( f_{i+1}(x)\) – after all, the check expresses the claimed value of \( f_i^P\) in terms of evaluations of \( f_{i+1}^P\) – so the new claim \( f_{i+1}^P(r_{i+1})=f_{i+1}(r_{i+1})\) will be true only if \( r_{i+1}\) lands on one of the places where the two distinct polynomials agree, which happens with probability at most \( \frac{d}{p}\).

Summing up these probabilities, we see that the probability of the prover getting away with their initial lie is at most \( \frac{(m+1)d}{p}\), where \( (m+1)d\) is polynomial in the length \( l\) of the formula. If we now let \( p\) be between \( 2^l\) and \( 2^{l+1}\) (invoking Bertrand’s postulate), then the probability of the verifier accepting a false formula becomes exponentially small. It’s easy to convince oneself with some technical calculations that this confirms soundness of the protocol.

Therefore, IP=PSPACE. \( \square\)

I hope to eventually make a blog post describing a proof of a similar, but perhaps even more surprising, result: that MIP, the class of problems with interactive proof protocols involving multiple provers, is equal to the class NEXP, which is *known* to be much larger than NP, thus showing that probability can provably enlarge a complexity class.
