BrainSTARK, Part IV: Next Steps

Congratulations! If you made it to this point, you deserve a one-time pad on the back.

Wait! That’s it – isn’t there any code? But of course there is! There is a fully functional Python implementation of the compiler, virtual machine, arithmetization, and STARK prover and verifier. Check out the github repository. You can use this source code as a starting point for the next steps below.

To run the test:

 $> git clone
 $> cd brainfuck-stark/code/
 $> pypy3
 >>> from test_brainfuck_stark import *
 >>> test_bfs()

You might want to run it through pypy3 rather than python3 because it’s faster.

Next Steps

So you mastered the mathematics, the computer science, and the cryptography involved in designing STARK engines. What’s left to do? Here are a couple of suggestions.


You may have noticed that the supporting implementation is really, really slow. Here are some things you might want to improve to make it all run faster.

  1. Hardcode AIR Polynomials. The symbolic representation of the AIR polynomials – i.e., as multivariate polynomials – benefits reasoning and rapid prototyping, not to mention testing. However, in the end we only care about getting the value of the AIR polynomial as a function of one or two consecutive rows of a table. Depending on the specific constraint, there may be more efficient ways to compute this value than by evaluating a symbolic representation.

  2. Reduce AIR Degree. Asymptotically the bottleneck is the NTT step, which is needed for computing the low-degree extension of codewords. This process transforms the codeword from one of length equal to the table it comes from, to length equal to the FRI evaluation domain length, which is much larger for two reasons: a) composition with the AIR polynomials generates quotient polynomials of large degree, and b) FRI works because the codeword corresponds to a polynomial of degree less than $\rho$ times the length of the codeword, where $0 < \rho < 1$ is the reciprocal of the expansion factor of FRI. While it is possible to set $\rho = 1/2$ in order to reduce the overhead of part (b), this assignment also generates the largest proofs. It is possible to gain much more by introducing new columns in order to reduce the degree of the AIR. In principle it is possible to reduce the degree of the AIR to two, although you might need to introduce a lot of columns for that.

  3. Drop Python. Python is excellent for short programs, for rapid prototyping, and for didactic purposes, but the truth is that it is really bad when it comes to performance. A lot of performance can be gained by porting the STARK engine to a language that operates closer to the hardware, such as Rust. Luckily we also built an implementation in Rust in order to verify the correct execution of Brainfuck programs such as Hello World and The Raven.
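To make point 1 concrete, the sketch below contrasts a generic evaluator for a symbolically represented multivariate constraint with a hardcoded evaluator for the same constraint. The toy constraint, the dictionary encoding, and the function names are all invented for this illustration and do not come from the codebase; the modulus is just one plausible choice of prime.

```python
# Illustrative prime modulus; any STARK-friendly prime would do here.
P = 2**64 - 2**32 + 1

def symbolic_constraint():
    # Toy transition constraint c(x, x_next) = x_next - x^2, encoded as a
    # multivariate polynomial: map from exponent tuple (e_x, e_xnext) to coefficient.
    return {(0, 1): 1, (2, 0): P - 1}

def eval_symbolic(poly, point):
    # Generic evaluation: loop over every term and every variable, with
    # modular exponentiation for each exponent. Flexible, but slow.
    acc = 0
    for exponents, coeff in poly.items():
        term = coeff
        for var, exp in zip(point, exponents):
            term = term * pow(var, exp, P) % P
        acc = (acc + term) % P
    return acc

def eval_hardcoded(x, x_next):
    # The same constraint, hardcoded: one multiplication, one subtraction,
    # one reduction. No loops, no generic exponentiation.
    return (x_next - x * x) % P
```

Both functions compute the same value on any pair of consecutive-row entries; the hardcoded version simply skips the symbolic machinery.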
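Point 2 can also be illustrated with a toy example. Suppose a table has a transition constraint $x' = x^4$, which has degree four. Introducing an auxiliary column $y$ that tracks $y = x^2$ replaces it with two constraints of degree two: $y - x^2 = 0$ and $x' - y^2 = 0$. The trace builder and constraint checker below are invented for this sketch; the real AIR constraints are of course different.

```python
P = 257  # small toy prime for the illustration

def build_trace(x0, steps):
    # Original column: x' = x^4 (degree-4 transition constraint).
    xs = [x0]
    for _ in range(steps):
        xs.append(pow(xs[-1], 4, P))
    # Auxiliary column: y = x^2, introduced to reduce the constraint degree.
    ys = [x * x % P for x in xs]
    return xs, ys

def check_degree_two_constraints(xs, ys):
    # With the auxiliary column, both transition constraints have degree two:
    #   y - x^2 = 0   and   x' - y^2 = 0.
    for i in range(len(xs) - 1):
        assert (ys[i] - xs[i] * xs[i]) % P == 0
        assert (xs[i + 1] - ys[i] * ys[i]) % P == 0
    return True
```

The price is one extra column of trace data; the payoff is a smaller quotient degree and hence a smaller FRI evaluation domain.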

Security and Zero-Knowledge

For performance reasons, the security level in the Python implementation was set to $\lambda = 2$. All the components are in place for higher security levels, so raising it is a straightforward change once you have a faster implementation.

Likewise, all the components are in place for producing (and verifying) zero-knowledge proofs, except that Brainfuck does not naturally lend itself to programs that make use of this feature. Specifically, the program, the input, and the output are all known to the verifier. What would it look like if you were to prove that you know a secret input that, when fed into the given program, generates the given output? It’s only a small change away. Or you might want to modify the language to include, say, an operation ? which reads one secret prime field element of input, alongside the , operation which reads one public prime field element.
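As a rough illustration of what such a ? operation might look like, here is a minimal interpreter fragment with two input tapes: a public one for , and a secret one for ?. The instruction subset, the function name, and the tape handling are all illustrative only; field arithmetic and the loop instructions are elided.

```python
def run(program, public_input, secret_input):
    # Minimal Brainfuck-style interpreter fragment with a hypothetical '?'
    # instruction. Only a subset of the instruction set is handled here.
    tape = [0] * 30000
    dp = 0                       # data pointer
    pub = iter(public_input)     # public input tape, known to the verifier
    sec = iter(secret_input)     # secret input tape, known only to the prover
    output = []
    for op in program:
        if op == ',':
            tape[dp] = next(pub)   # read one public input element
        elif op == '?':
            tape[dp] = next(sec)   # read one secret input element
        elif op == '.':
            output.append(tape[dp])
        elif op == '+':
            tape[dp] += 1
        elif op == '-':
            tape[dp] -= 1
        elif op == '>':
            dp += 1
        elif op == '<':
            dp -= 1
    return output
```

In a zero-knowledge setting, the proof would then establish that some secret input tape exists that produces the claimed public output, without revealing its contents.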

Modify the Instruction Set Architecture

If we are going to add new operations, why not get rid of old ones too? You could go so far as to design a whole new instruction set architecture and then generate and verify STARK proofs for programs in that ISA. If you go down that path, here are some suggestions.

Next up: Attack!