Summary
Recursive proofs are now deployed on mainnet, extending StarkEx applications and StarkNet. They improve scalability, reduce costs, and lower latency, and they pave the way for L3 and other innovative applications. Check out the blog post on recursive proofs; it's super cool!
Multiplicative scalability!
Recursive proofs for Cairo general-purpose computation are now in production. This marks a significant improvement in STARK L2 scalability: the number of transactions that a single proof written to Ethereum attests to can now be multiplied many times over.
So far, STARK proofs have achieved scale by "aggregating" tens of thousands, or even hundreds of thousands, of transactions into a single proof written to Ethereum. With recursion, many such proofs can themselves be "aggregated" into a single proof.
This approach has now been applied to numerous Cairo-based applications running on StarkEx, StarkWare's SaaS scaling engine, and on StarkNet, its permissionless rollup.
Development Journey So Far
Since the first proof was generated on the mainnet in March 2020, STARK proofs have gone through different stages of development.
STARK Scalability
In June 2020, the first STARK scalability solution, StarkEx, was deployed on the Ethereum mainnet. StarkEx includes a prover that performs large-scale computation off-chain and generates a STARK proof attesting to the validity of the transactions, and an on-chain verifier that checks that proof. The initial deployment was limited in functionality, as it was built from scratch by StarkWare engineers. Eventually it became clear that a programming language supporting general-purpose provable computation was needed, and thus Cairo was born.
Cairo Programming Language
Cairo made its debut on the Ethereum mainnet in the summer of 2020. Cairo stands for CPU Algebraic Intermediate Representation (AIR): in essence, it is a single AIR that verifies the instruction set of a "Cairo CPU." Cairo opened the door to encoding more complex business logic, and indeed arbitrary computational statements, quickly and securely. A Cairo program can prove the execution logic of a single application. But a Cairo program can also aggregate several such applications, and that is where SHARP comes in.
SHARP
SHARP stands for SHARed Prover: a shared prover that aggregates transactions from several independent applications and proves them together in a single STARK proof. By combining batches of transactions from different applications, the capacity of a STARK proof is filled faster, improving both throughput and latency. Recursive proofs are the next step for this technology, applicable not only to specific hard-coded logic but to general-purpose computation.
To fully appreciate the advantages of recursive proofs, it helps to understand how SHARP has generated (non-recursive) proofs so far. Figure 1 illustrates a typical non-recursive flow:
[Figure 1: A typical non-recursive proof flow in SHARP]
In this flow, propositions arrive one by one. When a capacity (or time) threshold is reached, a large combined proposition (a "Train") is created. This combined proposition can only be proven after all of its individual propositions have been received, and proving it takes a long time (roughly the sum of the times needed to prove each individual proposition).
Extremely large propositions are ultimately limited by available computing resources such as memory. Before recursion, this was actually a major obstacle to the scalability of STARK proofs.
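To make this concrete, here is a minimal Python sketch of the batching logic described above; the Proposition and Train classes, the step counts, and the capacity threshold are hypothetical illustrations, not SHARP's actual implementation.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Proposition:
    steps: int  # execution length of the proposition to be proven

@dataclass
class Train:
    capacity: int  # total steps that trigger proving
    pending: List[Proposition] = field(default_factory=list)

    def add(self, prop: Proposition) -> Optional[str]:
        self.pending.append(prop)
        if sum(p.steps for p in self.pending) >= self.capacity:
            return self.prove_all()
        return None  # keep waiting for more propositions

    def prove_all(self) -> str:
        # Proving starts only after every proposition has arrived, and its
        # time and memory cost grow with the combined size of the batch --
        # the bottleneck that recursion removes.
        total = sum(p.steps for p in self.pending)
        self.pending.clear()
        return f"one proof covering {total} execution steps"

train = Train(capacity=100_000)
for size in (30_000, 40_000, 35_000):
    proof = train.add(Proposition(size))
print(proof)  # the last proposition pushes the Train over capacity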
What is a Recursive Proof?
With STARK proofs, the time needed to prove a proposition is roughly linear in the time needed to execute it. Furthermore, if proving a proposition takes time T, verifying the proof takes roughly log(T) time, which is often referred to as "logarithmic compression." In other words, STARKs let users spend far less time verifying a proposition than computing it.
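The following snippet is purely illustrative of that asymmetry; the time units and constant factors are hypothetical placeholders, not measured figures.

import math

# Purely illustrative numbers for "logarithmic compression": if proving a
# proposition takes time T, verifying the resulting proof takes roughly
# log(T). Units and constants are hypothetical placeholders.
for T in (10**3, 10**6, 10**9):
    print(f"prove time ~{T:>13,} -> verify time ~{math.log2(T):.0f}")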
Cairo allows the expression of general-purpose computational propositions that can be proven by STARK and verified by the corresponding STARK verifier.
This is where the opportunity for recursion lies: just as one can write a Cairo program to prove the correctness of thousands of transactions, one can also write a Cairo program to verify multiple STARK proofs. A proof can be generated to verify the validity of multiple "upstream" proofs. This is what we call a recursive proof.
Thanks to logarithmic compression, verifying a STARK proof is a small computation; and since proving time is roughly linear in the computation being proven, proving the verification of a STARK proof takes relatively little time (expected to be only a few minutes in the near future).
When implementing recursion, SHARP can prove propositions as soon as they arrive. Proofs can then be merged repeatedly into recursive proofs, in various patterns, until at some point the resulting proof is submitted to the on-chain verifier contract. Figure 2 shows a typical recursive proof pattern:
[Figure 2: A typical recursive proof pattern]
In this example, four propositions are sent to SHARP (possibly originating from four different applications). They are first proven individually, in parallel. Then each pair of proofs is verified by a recursive verifier proposition (a Cairo program that verifies STARK proofs), whose proof attests that both input proofs were verified. Next, the two resulting proofs are merged in the same way by another recursive verifier proposition, producing a single proof that attests to all four original propositions. This proof can be submitted on-chain and verified by the Solidity verifier smart contract.
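A minimal Python sketch of this binary merge pattern follows. The prove and merge helpers are hypothetical stand-ins that only track which propositions a proof covers; in reality the recursive verifier is a Cairo program whose execution is itself proven.

def prove(proposition):
    # Stand-in for proving a single proposition.
    return {"covers": [proposition]}

def merge(proof_a, proof_b):
    # A recursive verifier proposition checks both input proofs; proving it
    # yields one proof attesting to everything the two inputs attested to.
    return {"covers": proof_a["covers"] + proof_b["covers"]}

def aggregate(propositions):
    layer = [prove(p) for p in propositions]      # proven in parallel
    while len(layer) > 1:
        nxt = [merge(layer[i], layer[i + 1]) for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:                        # odd proof carried forward
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]                               # the proof submitted on-chain

print(aggregate(["prop1", "prop2", "prop3", "prop4"]))
# {'covers': ['prop1', 'prop2', 'prop3', 'prop4']}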
Direct Benefits of Recursive Proofs
Reduced on-chain costs
By "compressing" multiple proofs into one, the on-chain verification cost per transaction drops significantly (each proposition may contain many transactions).
Recursive proofs also remove the computational resource barriers (such as memory) that have limited proof size until now, because each individual proposition is of bounded size and is proven separately. With recursion, the capacity of the recursively aggregated Train is therefore practically unlimited, and the cost per transaction can drop by several orders of magnitude.
In practice, the cost reduction depends on the acceptable latency (and on the rate at which transactions arrive). In addition, since each proof is usually accompanied by corresponding on-chain data output, the amount of data that can be written to the chain alongside a single proof is also limited. Nevertheless, reducing costs by an order of magnitude, and often more, is readily achievable.
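The back-of-the-envelope sketch below shows how the amortization works; all numbers are hypothetical placeholders, not measured gas costs.

# Hypothetical placeholder numbers, not measured gas costs.
l1_verification_gas = 1_000_000          # assumed cost to verify one proof on L1
txs_per_proposition = 10_000             # assumed transactions per proposition
propositions_per_recursive_proof = 8     # propositions aggregated by recursion

gas_per_tx_single = l1_verification_gas / txs_per_proposition
gas_per_tx_recursive = l1_verification_gas / (txs_per_proposition * propositions_per_recursive_proof)

print(f"per-tx verification gas without recursion: {gas_per_tx_single:.2f}")
print(f"per-tx verification gas with recursion:    {gas_per_tx_recursive:.2f}")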
Reduced latency
The recursive proof pattern also reduces the latency of proving large combined propositions. Two factors are at play:
Incoming propositions can be proven in parallel (instead of proving one huge combined proposition).
Proving does not have to wait for the last proposition of the Train to arrive; instead, new propositions can be merged into the existing proof as they come. In other words, the latency added by the last proposition to join a Train is roughly the time needed to prove that proposition plus the time needed to prove one recursive verifier proposition (which merges it with the proof covering everything already "added" to this particular Train), as the sketch after this list illustrates.
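For illustration only, here is a rough comparison of the two latency models; the timings are made-up placeholders (in minutes), not measured figures.

# Made-up per-proposition proving times (minutes) for one Train, plus an
# assumed time to prove a single recursive verifier proposition.
prove_times = [5, 7, 6, 5]
recursive_merge_time = 3

# Non-recursive: proving starts only after the last proposition arrives and
# costs roughly the sum of the individual proving times.
non_recursive_latency = sum(prove_times)

# Recursive: the last proposition waits only for its own proof plus one merge
# with the proof already covering the rest of the Train.
recursive_latency = prove_times[-1] + recursive_merge_time

print(non_recursive_latency, recursive_latency)   # 23 vs. 8 minutes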
We are actively optimizing the latency of the recursive verifier proposition and expect it to reach the order of minutes within a few months. Effective SHARP latency can then be kept in the range of minutes to hours, depending on the desired trade-off with per-transaction on-chain cost. This is a significant improvement in SHARP latency.
Promoting L3 Applications
Recursive verifier propositions developed in Cairo also open up the possibility of submitting proofs to StarkNet, because the recursive verifier can be written as a StarkNet smart contract. This allows L3s to be deployed on top of the StarkNet L2 public network.
The recursive pattern applies equally to proofs aggregated on L3 and verified by a single proof on L2, so L3s can achieve the same kind of massive scalability.
More Applications
Applicative Recursion
Recursion opens up further opportunities for platforms and applications that want to improve their costs and performance even more.
Each STARK proof attests to the validity of a proposition applied to some "public input" (or "program output," in Cairo terminology). Conceptually, STARK recursion compresses two proofs with two inputs into one proof that still carries both inputs: the number of proofs shrinks, but the number of inputs stays the same. The inputs are then typically used by the application to update some state on L1 (for example, updating the state root or executing on-chain withdrawals).
If the recursive proposition is application-aware, i.e., it understands the semantics of the application itself, then it can compress two proofs into one and also combine the two inputs into one. The resulting proposition verifies the validity of that combined input according to the application's semantics. This is called applicative recursion (see Figure 3 for an example).
[Figure 3: An example of applicative recursion]
Proposition 1 proves the state update from A to B, while Proposition 2 proves the further update from B to C. The proofs of Propositions 1 and 2 can be merged into a third proposition that directly proves the update from A to C. By applying this kind of recursion repeatedly, the cost of state updates can be reduced significantly, subject to the desired latency.
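A minimal sketch of this idea follows; the TransitionProof class and merge_transitions helper are hypothetical names standing in for the application-aware recursive proposition.

from dataclasses import dataclass

@dataclass
class TransitionProof:
    old_state: str   # state root before the update
    new_state: str   # state root after the update

def merge_transitions(p1: TransitionProof, p2: TransitionProof) -> TransitionProof:
    # The application-aware recursive proposition verifies both proofs and
    # checks that p1's output state equals p2's input state, then exposes
    # only the combined A -> C transition as its public input.
    assert p1.new_state == p2.old_state, "transitions must chain"
    return TransitionProof(p1.old_state, p2.new_state)

print(merge_transitions(TransitionProof("A", "B"), TransitionProof("B", "C")))
# TransitionProof(old_state='A', new_state='C')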
Another important example of applicative recursion is compressing the data output that accompanies multiple proofs. For a validity-proof rollup like StarkNet, every L2 storage update is also transmitted to L1 as data in order to ensure data availability. However, there is no need to send multiple updates for the same storage slot: only the latest value, as attested by the proof, is needed. This optimization is already performed within a single StarkNet block. With applicative recursion, the data of multiple L2 blocks can be compressed together, significantly reducing costs and allowing shorter L2 block times without sacrificing the scalability of L1 updates.
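A simplified sketch of this compression follows; the per-block diff format used here is a hypothetical simplification of StarkNet's actual data-availability format.

# Each block's state diff is modeled as a dict {storage_key: new_value};
# this is a hypothetical simplification of StarkNet's actual DA format.
def compress_storage_diffs(block_diffs):
    final_values = {}
    for diff in block_diffs:          # blocks in order
        final_values.update(diff)     # later writes overwrite earlier ones
    return final_values

blocks = [
    {"0x1": 10, "0x2": 20},
    {"0x1": 11},                      # the same slot updated again
    {"0x3": 30, "0x2": 25},
]
print(compress_storage_diffs(blocks))   # {'0x1': 11, '0x2': 25, '0x3': 30}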
It is worth noting that applicative recursion can be combined with the general-purpose recursion described earlier; the two optimizations are independent of each other.
Reduced complexity of on-chain verifiers
The complexity of a STARK verifier depends on the kind of propositions it verifies. In particular, for Cairo propositions, the verifier's complexity depends on which features of the Cairo language are allowed, and more specifically on which built-ins are supported (if Cairo is compared to a CPU, built-ins are the analog of dedicated microcircuits: computations frequent enough to deserve their own optimized implementation).
The Cairo language is constantly evolving and offers more and more useful built-ins. The recursive verifier, on the other hand, needs only a small subset of them. Recursive SHARP can therefore support any Cairo proposition by supporting the full language in the recursive verifier, while the Solidity verifier on L1 only needs to verify recursive proofs and can be limited to a smaller, more stable subset of the language: the L1 verifier does not need to be updated whenever new built-ins are introduced. In other words, as propositions evolve, the complexity of verifying them is handled by the recursive verifier, and the L1 verifier only has to verify simple, stable propositions.
Reduced computational footprint
Before recursion, aggregating multiple propositions into a single proof was limited by the size of the proposition that could be proven on the available compute instances (and by the time needed to generate such a proof).
With recursion, such large propositions no longer need to be proven: proving can run on smaller, cheaper compute instances (although more of them may be needed than with a single monolithic prover). This makes it possible to deploy prover instances in many more physical and virtual environments.
Summary
Recursive proofs for general-purpose computation are now serving multiple production systems, including StarkNet, on the Ethereum mainnet.
As improvements continue, the advantages of recursion will become increasingly apparent: the potential of parallel computation will be unlocked, bringing lower gas fees, better latency, and ultimately far greater scalability.
The advantages of recursion in terms of cost and latency are exceptionally significant and will also give rise to new opportunities such as L3 and applicative recursion. The recursive verifier will continue to be optimized, and performance and cost-effectiveness will gradually improve.