
RFC: Cells Commitments [WIP - DO NOT MERGE] #424

Open
wants to merge 2 commits into base: master

Conversation

quake (Member) commented Aug 18, 2023

@quake quake requested a review from a team as a code owner August 18, 2023 01:29
@quake quake requested a review from XuJiandong August 18, 2023 01:29
contrun commented Aug 21, 2023

Why do we choose MMR over other accumulators/vector commitments? Do we choose MMR because of its trustlessness? It would be great if we could mention the alternative cell commitment implementations, their respective tradeoffs, and the reason why we chose MMR after all.

One of the greatest strengths of MMR is that it does not need a trusted setup. Vector commitments with trusted setup are absolutely unacceptable to us, right? Have we considered accumulators based on bilinear pairings?

Both the cell status proof generation and commitment update time of MMR are not so good (real-world benchmarks needed), and the cell status proofs are not aggregatable (unless using SNARKs). According to Are there cryptographic accumulators without trusted setup? - Alin Tomescu, the only other viable trustless alternative is RSA accumulators over class groups. Have we considered using that? What is the performance of RSA accumulators over class groups compared to MMR?

Although the current specification leaves space for chain reorganization, the cell commitment algorithm itself seems un-upgradable to me. Should we use some discriminant in the extension field to leave room for future algorithmic upgrades?

matt-nervos commented Aug 21, 2023

The complexity of MMR compared to RSA + class groups seems to be a good argument for MMR. "Dumb and unbreakable" has been a good approach for blockchains (look at Bitcoin). https://www.michaelstraka.com/posts/classgroups/

As far as upgrades go, relying solely on hash functions in a design not subject to change seems in line with "What Bitcoin Did". Aggregating state proofs could be a nice feature, but the cost of complexity is high, and to my understanding class groups have been a largely ignored corner of cryptography (more eyes make bugs shallow). Interested to see the answer.

Quantum computers will eventually break pairings because they're elliptic curve-based right?

For performance, it seems like proof generation would be reasonable for an MMR. Commitment update time is a consideration because it would need to be done by miners to create the next block template. My assumption is serious miners/pools would hold the state and intermediate hashes in memory to expedite the calculation; from there it's just hash iterations (I agree a benchmark would be good to have, it could be thousands of hashes per block once CKB is used at scale). BLAKE2b is ~3 cycles per byte as referenced here: https://www.blake2.net/ I did some quick back-of-the-envelope math for reference:

Theoretically 2^28 live/dead cells = 268,435,456

Theoretically one block:
500 cells in
1000 cells out

1500 * 28 * 64 * 3 = 8,064,000 cycles; assuming a 4 GHz processor, 8,064,000 / 4,000,000,000 = 0.002 seconds.
Much of this could probably be pre-computed before block templating, because a miner will have the txs when they are proposed.
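As a sanity check, the back-of-the-envelope math above can be written out. The depth, hash-input size, and cycle figures are the comment's own assumptions (tree depth ~28, 64-byte hash inputs, BLAKE2b at ~3 cycles/byte, one 4 GHz core), not measured values:

```python
# Back-of-the-envelope estimate of hashing cost for one block's tree updates.
# All figures are illustrative assumptions from the discussion above.

def block_hash_seconds(cells=1500, depth=28, node_bytes=64,
                       cycles_per_byte=3, clock_hz=4_000_000_000):
    # one hash per tree level per cell, each over node_bytes of input
    cycles = cells * depth * node_bytes * cycles_per_byte
    return cycles, cycles / clock_hz

cycles, seconds = block_hash_seconds()
print(cycles)   # 8064000
print(seconds)  # ~0.002
```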

Additional question: @contrun what's the computational cost of verification to check inclusion in an RSA accumulator? (I'm thinking about another chain checking CKB state at a distance.)

contrun commented Aug 22, 2023

@matt-nervos I think this calculation of cycles applies only to Merkle tree, but not to MMR.

1500 * 28 * 64 * 3 = 8,064,000 cycles; assuming a 4 GHz processor, 8,064,000 / 4,000,000,000 = 0.002 seconds.

Appending leaves to an MMR is quite cheap, as any decent miner should have cached the hash results of the peaks. Adding more leaves involves adding a few new peaks and combining some peaks into higher ones. I think inserting 1000 new leaves only needs a little more than 1000 hashes in the worst case.
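That worst-case claim can be checked with a small counting sketch. It relies only on the standard fact that an MMR with n leaves contains n − popcount(n) internal (merge) nodes; leaf hashes are counted separately, and peak caching is assumed as above:

```python
def merge_hashes(n_before, n_added):
    """Internal (merge) hashes performed when appending n_added leaves to an
    MMR that already holds n_before leaves, assuming the peaks are cached.
    An MMR with n leaves has n - popcount(n) internal nodes in total."""
    def internal(n):
        return n - bin(n).count("1")
    return internal(n_before + n_added) - internal(n_before)

# Appending 1000 leaves to a 2^28-leaf MMR: fewer than 1000 merge hashes
# (plus one hash per new leaf, if leaf hashing is counted too).
print(merge_hashes(2 ** 28, 1000))  # 994
```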

what's the computational cost of validation to check inclusion in a RSA accumulator?

The verification process of an RSA accumulator is just checking that the exponentiation of the proof equals the accumulator, so it should be very cheap.

I've got an idea inspired by this calculation. We can have two lists: one for cell creation, another for cell destruction. To show that a cell is alive at block height h, we need two proofs: a membership proof of this cell in the creation list, and a non-membership proof of this cell in the destruction list. The benefit of two separate lists is that both are now append-only. For membership proofs with MMR, append-only makes updating the commitment by batch-adding new leaves easier. I don't think current MMR supports non-membership proofs. We may tweak MMR to support non-membership proofs (is it possible at all?) or use another vector commitment scheme that supports efficient non-membership proofs (maybe just using a sparse Merkle tree).
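As a toy model of this two-list design, with plain Python sets standing in for the real accumulators (the membership/non-membership proofs themselves are elided):

```python
# Toy model: `created` stands in for the append-only creation list (e.g. an
# MMR) and `destroyed` for the destruction list (some scheme supporting
# non-membership proofs). Both only ever grow.

class CellLedger:
    def __init__(self):
        self.created = set()    # append-only
        self.destroyed = set()  # append-only

    def create(self, cell):
        self.created.add(cell)

    def destroy(self, cell):
        assert cell in self.created
        self.destroyed.add(cell)

    def is_live(self, cell):
        # real version: membership proof against `created`
        # plus non-membership proof against `destroyed`
        return cell in self.created and cell not in self.destroyed

ledger = CellLedger()
ledger.create("cell_a")
ledger.create("cell_b")
ledger.destroy("cell_a")
print(ledger.is_live("cell_a"))  # False
print(ledger.is_live("cell_b"))  # True
```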

matt-nervos commented

@contrun how can you do non-membership proof in an MMR?

contrun commented Aug 22, 2023

@matt-nervos I don't know it yet. I need to do some research. I don't even know if it is possible. I changed some wording of my previous reply. Sorry for the confusion.

quake (Member, Author) commented Aug 23, 2023

Why do we choose MMR over other accumulators/vector commitments? Do we choose MMR because of its trustlessness?

We chose MMR for the following reasons

  1. no trusted setup
  2. no strong RSA assumption
  3. battle-tested in other blockchain projects ( grin / polkadot / etc...)

If you know of other suitable accumulator algorithms with production-ready Rust library implementations, please feel free to recommend them, and we can benchmark them against MMR.

xcshuan commented Aug 24, 2023

Why use MMR to record the commitment of all cells instead of using an SMT to record the commitment of the current live cells (like a state-tree)? Since we already have Light-client, we can attest to the cells commitment (root) at a certain block height.

  • At one block height, a cell has a non-membership proof, and at the next block height it has a membership proof, so the creation time of the cell is known.
  • At one block height, a cell has a membership proof, and at the next block height it has a non-membership proof; from this, the consumption time of the cell is known. Alternatively, a membership proof of the cell in the latest block can be provided, which proves that the cell has not been consumed.

Wouldn't this be easier to implement and cost less? Why do we need a tree that accumulates all the historical cells?

quake (Member, Author) commented Aug 24, 2023

Why use MMR to record the commitment of all cells instead of using an SMT to record the commitment of the current live cells (like a state-tree)? Since we already have Light-client, we can attest to the cells commitment (root) at a certain block height.

  • At one block height, a cell has a non-membership proof, and at the next block height it has a membership proof, so the creation time of the cell is known.
  • At one block height, a cell has a membership proof, and at the next block height it has a non-membership proof; from this, the consumption time of the cell is known. Alternatively, a membership proof of the cell in the latest block can be provided, which proves that the cell has not been consumed.

Wouldn't this be easier to implement and cost less? Why do we need a tree that accumulates all the historical cells?

Due to algorithmic differences, SMT requires more storage space than MMR (around 2.2x) and has poorer performance in updating commitments (around 3x)

xcshuan commented Aug 24, 2023

Due to algorithmic differences, SMT requires more storage space than MMR (around 2.2x) and has poorer performance in updating commitments (around 3x)

But why not just keep the current live cells commitment? If a cell was created a long time ago and has not been consumed, is its proof path longer than that of a newer cell that has been consumed?

quake (Member, Author) commented Aug 24, 2023

But why not just keep the current live cells commitment? If a cell was created a long time ago and has not been consumed, is its proof path longer than that of a newer cell that has been consumed?

I didn't quite understand this optimization suggestion, could you elaborate a bit more? Essentially there is no big difference between SMT and MMR, they are both merkle trees, except that SMT has sparse leaf nodes and MMR has compact layout, so MMR has better storage space and update time than SMT. Any other optimizations we can make for SMT can also be applied to MMR.

xcshuan commented Aug 24, 2023

I didn't quite understand this optimization suggestion, could you elaborate a bit more? Essentially there is no big difference between SMT and MMR, they are both merkle trees, except that SMT has sparse leaf nodes and MMR has compact layout, so MMR has better storage space and update time than SMT. Any other optimizations we can make for SMT can also be applied to MMR.

This is not an optimization suggestion, but a question: why use an accumulator to record all historical cell records? Unlike the state tree approach, each block would only record the live cells as of the current block, since we can prove a path against the root of a specific block (by light client, we can prove the validity of a specific block).

quake (Member, Author) commented Aug 24, 2023

This is not an optimization suggestion, but a question: why use an accumulator to record all historical cell records? Unlike the state tree approach, each block would only record the live cells as of the current block, since we can prove a path against the root of a specific block (by light client, we can prove the validity of a specific block).

Sorry, I misunderstood your original comment. Here is a question: a cell was created in block N, the current tip is N + M, and this cell is live. How can we prove its live status if the current tip block only stores the commitment of cells that were created / consumed in block N + M?

xcshuan commented Aug 24, 2023

Sorry, I misunderstood your original comment. Here is a question: a cell was created in block N, the current tip is N + M, and this cell is live. How can we prove its live status if the current tip block only stores the commitment of cells that were created / consumed in block N + M?

We can combine the commitment of live cells with the commitment of block hashes (light client), right? Even if we use an accumulator to store all historical cells, we still need a method to ensure the current block is valid.
So if we can use the light client mechanism to prove the validity of the block at height N, it's easy to combine a block header inclusion proof and a cell inclusion proof. And we can use the TransactionRoot to prove that a specific cell is included in a transaction as an input or output.
Maybe I'm misunderstanding what the light client protocol can do, but if my understanding is correct, I think it's unnecessary to store all historical cells in a single tree, as we already accumulate all block hashes. This way, the size of the tree is always consistent with the current state size.

quake (Member, Author) commented Aug 24, 2023

We can combine the commitment of live cells with the commitment of block hashes (light client), right? Even if we use an accumulator to store all historical cells, we still need a method to ensure the current block is valid. So if we can use the light client mechanism to prove the validity of the block at height N, it's easy to combine a block header inclusion proof and a cell inclusion proof. And we can use the TransactionRoot to prove that a specific cell is included in a transaction as an input or output. Maybe I'm misunderstanding what the light client protocol can do, but if my understanding is correct, I think it's unnecessary to store all historical cells in a single tree, as we already accumulate all block hashes. This way, the size of the tree is always consistent with the current state size.

If I understand correctly, we would also need to provide non-membership proofs for all the blocks between N + 1 and N + M, to prove that the cell has not been consumed in the intervening blocks. The proof size increases linearly with M, so I don't think it's a feasible solution.

xcshuan commented Aug 24, 2023

If I understand correctly, we would also need to provide non-membership proofs for all the blocks between N + 1 and N + M, to prove that the cell has not been consumed in the intervening blocks. The proof size increases linearly with M, so I don't think it's a feasible solution.

No, we can give a simple membership proof to show the cell still exists in the current live cell tree (latest block).

  1. created time: use a blockHash proof + a transaction proof, or a non-membership proof at height n-1 and a membership proof at height n.
  2. consumed time: use a blockHash proof + a transaction proof, or a membership proof at height n+m-1 and a non-membership proof at height n+m (if the cell is consumed at block n+m).
  3. still live: a membership proof at the latest block height (n+m).

So we don't need to accumulate all historical cells; committing the current live cells is enough.
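The three cases above can be illustrated with per-height snapshots of the live-cell set standing in for the SMT root at each block height (proofs are elided; the heights and cell names are made up for illustration):

```python
# Toy illustration: snapshots[h] stands in for the live-cell SMT root at
# block height h; membership / non-membership checks stand in for proofs.

snapshots = {0: frozenset()}  # height -> live-cell set at that height
live = set()

def apply_block(height, created=(), consumed=()):
    live.difference_update(consumed)
    live.update(created)
    snapshots[height] = frozenset(live)

apply_block(1, created={"c1"})
apply_block(2, created={"c2"})
apply_block(3, consumed={"c1"})

# 1. creation time of c1: non-membership at height 0, membership at height 1
assert "c1" not in snapshots[0] and "c1" in snapshots[1]
# 2. consumption time of c1: membership at height 2, non-membership at height 3
assert "c1" in snapshots[2] and "c1" not in snapshots[3]
# 3. c2 still live: membership at the latest height
assert "c2" in snapshots[3]
```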

matt-nervos commented Aug 24, 2023

Won't there be some heavy recalculation when removing members of the MMR and then re-aligning it? Avoiding this process could be a justification for accumulating history. (read until the end)

My first idea for the live cell commitment was an ordered SMT, a new cell would occupy the left most unoccupied leaf.

For example a 1->2 spend would place the 1st output in the leaf that was previously occupied by the input (an issue with this idea is outputs from the same transaction will end up far away from each other, but maybe this is ok.. UTXO is sharded state, different outputs are likely functionally different matters.. like a different user's state/coins).

I feel like this construction is better suited to a traditional SMT rather than an optimized one (really not sure why.. maybe it's because the right side of the tree will be completely sparse).

Depth can be reasonably bounded because the minimum cell size is 61 CKB. A depth of 32 would require 261,993,005,056 CKB to be issued for it to be possible to exhaust its capacity.
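A quick check of that bound (a depth-32 tree has 2^32 leaves, each occupied leaf needing at least the 61 CKB minimum):

```python
# Sanity check on the depth bound: capacity needed to fill every leaf of a
# depth-32 tree at the 61 CKB per-cell minimum.
min_cell_ckb = 61
leaves = 2 ** 32
print(leaves * min_cell_ckb)  # 261993005056
```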

In the event that number of inputs exceeds number of outputs, those leaves would be left null until state grew.

I realized after re-reading the thread that no one was advocating for storing (only) live cell commitment in an MMR.. but maybe this is an option. If a similar process as laid out above is implemented with an MMR (placing the new output at the left most empty leaf in the MMR), re-alignment of the MMR can be avoided.

xcshuan commented Aug 25, 2023

  1. created time: use a blockHash proof + a transaction proof or a non-membership proof at height n-1 and a membership proof at height n.

I need to add that with transaction_proof, each node can generate the proof (I don't know if it's convenient, and it may be difficult to verify on other chains for large CKB transactions, as a CKB transaction can be up to around 500 KB), but using a membership proof or non-membership proof requires an archive node that stores the whole historical cells tree.

quake (Member, Author) commented Aug 28, 2023

No, we can give a simple membership proof to show the cell still exists in the current live cell tree (latest block).

I see, in this way it is possible to store only the live cells in the accumulator, and the accumulator can be used to prove the state of the cell at any given block height. This is a very interesting idea, thanks for sharing.

There may be a huge advantage in storage space compared to the current approach, and the performance of verifying and generating commitments is improved because IO is reduced. The current counts of all cells vs live cells are 43.43M / 1.48M (mainnet) and 118.55M / 31.37M (testnet); this reduction in data size could theoretically lead to significant performance improvements. I'll do a benchmark to verify this.

And I think we can also use MMR to store only the live cells by using MMR's pruning feature; it may be more efficient than the SMT. I'll benchmark this too.

In addition, if I understand correctly, this approach has a disadvantage: when we want to verify the state of multiple cells, if their generation block heights and consumption block heights are not the same, it is impossible to merge them into one Merkle proof; for each cell we need to provide a membership proof and a non-membership proof. By using an all-cells accumulator, we can use a single Merkle proof to verify the state of multiple cells; the proof size is around log2(N) (N = number of all cells). As a comparison, the proof size will be 2 * M * log2(N) (M = number of cells to verify, N = number of live cells) if we use a live-cells accumulator. For some cross-chain scenarios, we may need to consider this proof size issue.

xcshuan commented Aug 28, 2023

if their generation block height and consumption block height are not the same, then it is impossible to merge them into one merkle proof, for each cell we need to provide a membership proof and non-membership proof.

Yes, but with some SNARK techniques, like folding, I think this issue can be mitigated.

matt-nervos commented

for each cell we need to provide a membership proof and non-membership proof.

is this how non-membership would be proven?

  1. Value v at MMR index x is proven at block height y
  2. Prove at a block height greater than y, that at MMR index x the value is not v

quake (Member, Author) commented Aug 30, 2023

is this how non-membership would be proven?

  1. Value v at MMR index x is proven at block height y
  2. Prove at a block height greater than y, that at MMR index x the value is not v

Yes, we can set the value `v` to the block height at which the cell was generated, and the block height of proof 2 should be `y + 1`; in this way we can prove that it was generated at height `v` and consumed at height `y + 1`.

matt-nervos commented Aug 30, 2023

Yes, we can set the value `v` to the block height at which the cell was generated, and the block height of proof 2 should be `y + 1`; in this way we can prove that it was generated at height `v` and consumed at height `y + 1`.

this would mean that all historical cell records are maintained in the MMR though, correct?

quake (Member, Author) commented Sep 13, 2023

So we don't need to accumulate all historical cells; committing the current live cells is enough.

We will use an SMT of height 3 as an example:

               root
           /            \
          /              \
         0                1
       /   \            /   \
      /     \          /     \
     00      01       10     11
     /\      /\       /\      /\
    /  \    /  \     /  \    /  \
   000 001 010 011  100 101 110 111

First, let's insert an element at position 011; we will denote this state as v0. The node data of the SMT to be stored is as follows:

v0:
               root
           /
          / 
         0 
           \
            \
            01 
              \
               \
              011

They are stored in key-value form; we need to save values for 4 keys: root-v0 / 0-v0 / 01-v0 / 011-v0. Please note that in an actual implementation we would use shortcuts to optimize storage space, but for descriptive purposes we'll use this non-optimized form throughout.

Next, we insert another element at position 000, denoting this state as v1. The node data of the SMT to be stored is as follows:

v1:
               root
           /            
          /             
         0              
       /   \            
      /     \          
     00      01       
     /        \       
    /          \     
   000         011  

We need to save values for 4 keys: root-v1 / 0-v1 / 00-v1 / 000-v1. The previous values of keys root-v0 / 0-v0 / 01-v0 / 011-v0 are still retained in storage. By utilizing the storage's prefix-seek feature, we can find the data for 01 and 011 in the current state v1 (as they haven't changed), while also excluding data updated after v0 when we need to generate a proof for v0.

Continuing with state updates, we remove the element at position 011, denoting this state as v2. The node data of the SMT to be stored is as follows:

v2:
               root
           /            
          /             
         0              
       /
      / 
     00
     / 
    /  
   000

It might seem that we only need to save values for 2 keys (root-v2 / 0-v2) in the v2 state. However, we can't directly delete the key-value pairs for 01 / 011 in this state; doing so would lead to errors in later state updates or historical state queries. Instead, we version up the keys for 01 / 011, saving them as 01-v2 / 011-v2 with null values. Thus this storage design doesn't significantly reduce the space occupied; it only saves the hash values (32 bytes per node), while the key paths and values corresponding to the historical states still need to be stored.
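The versioning scheme described above can be modelled with a flat map keyed by (path, version), where a read at version v picks the newest entry at or below v (a stand-in for the storage engine's prefix seek), and deletions are written as explicit nulls rather than removed:

```python
# Toy model of versioned SMT node storage: (path, version) -> hash-or-None.
store = {}

def put(path, version, value):
    store[(path, version)] = value

def get(path, version):
    # stand-in for prefix seek: newest entry for `path` at or below `version`
    candidates = [v for (p, v) in store if p == path and v <= version]
    return store[(path, max(candidates))] if candidates else None

# v0: insert at 011
for path in ("root", "0", "01", "011"):
    put(path, 0, f"h({path})@v0")
# v1: insert at 000 -- 01 / 011 are untouched, older versions still resolve
for path in ("root", "0", "00", "000"):
    put(path, 1, f"h({path})@v1")
# v2: remove 011 -- 01 / 011 are tombstoned with nulls, not deleted
for path in ("root", "0"):
    put(path, 2, f"h({path})@v2")
for path in ("01", "011"):
    put(path, 2, None)

print(get("011", 1))  # h(011)@v0 -- unchanged node falls back to v0
print(get("011", 2))  # None -- tombstoned at v2
```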

I wrote a simple benchmark (https://github.com/quake/dca-bench/tree/master/src/smt_live): depending on the number of elements in the SMT accumulator, the storage savings are roughly between 8% and 10%, and the read/write performance of the accumulator is almost the same as the implementation that stores all elements in the SMT accumulator. Considering the increase in proof size as well, I don't think this optimization is worth doing. If there's any misunderstanding in my explanation or benchmark, please feel free to correct me.

xcshuan commented Sep 19, 2023

However, we can't directly delete the key-value pairs for 01 / 011 in this state; doing so would lead to errors in later state updates or historical state queries. Instead, we version up the keys for 01 / 011, saving them as 01-v2 / 011-v2 with null values.

If the cells that have been consumed still occupy state, we will not get much benefit. I didn't know this; I thought consumed cells could be deleted directly from the state.

In smt_live, what is the key of each cell? I don't quite get the word "position". If we use hash(outpoint, index) as the key, do we still need to retain some data of consumed cells?

And do all nodes need to store this historical data, or only those nodes that want to provide complete proof-generation capabilities? Do simple full nodes only need the current live cell state?

quake (Member, Author) commented Sep 25, 2023

In smt_live, what is the key of each cell? I don't quite get the word "position". If we use hash(outpoint, index) as the key, do we still need to retain some data of consumed cells?

Yes, we use hash(outpoint, index) as the key; the position is the 256-bit integer corresponding to the hash result.
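For illustration, a key derivation along these lines might look as follows; the exact serialization, hash personalization, and endianness here are assumptions for the sketch, not CKB's actual encoding:

```python
# Hypothetical sketch: derive a 256-bit SMT key (and thus leaf position)
# from a cell's out point as hash(tx_hash, index), using BLAKE2b.
import hashlib

def cell_key(tx_hash: bytes, index: int) -> bytes:
    h = hashlib.blake2b(digest_size=32)
    h.update(tx_hash)                       # assumed: raw 32-byte tx hash
    h.update(index.to_bytes(4, "little"))   # assumed: u32 little-endian index
    return h.digest()

key = cell_key(b"\x11" * 32, 0)
# the 256-bit key doubles as the leaf position in the sparse tree
position = int.from_bytes(key, "big")
print(len(key), position < 2 ** 256)  # 32 True
```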

quake (Member, Author) commented Sep 25, 2023

And do all nodes need to store this historical data, or only those nodes that want to provide complete proof-generation capabilities? Do simple full nodes only need the current live cell state?

Similar to eth's full / archive nodes, we only need to keep the live cell state on the full node; the archive node needs to store all the historical data.

States that existed before some recent block can effectively be thrown away by a full node. However, the above discussion is for nodes that store all data, and we have no plans to implement separate archive nodes / full nodes; to increase the availability of the network, we would like full nodes to be able to serve historical data.

xcshuan commented Sep 26, 2023

If the topic has been thoroughly explored, I have no problem with the current conclusion.

matt-nervos commented

If the topic has been thoroughly explored, I have no problem with the current conclusion.

to confirm, the conclusion is that MMR is more viable than SMT?

@janx janx changed the title RFC: Cells Commitments RFC: Cells Commitments [WIP - DO NOT MERGE] Jan 9, 2024
matt-nervos commented

@quake @janx is it possible to share where this issue currently stands? The cell root will be important long term; I want to make sure progress continues on resolving outstanding issues so it can mature.

janx (Member) commented Feb 10, 2024

I agree a live cells commitment can be a useful enhancement, however:

  • This was a medium-priority issue. The issue was raised by Axon/Godwoken, but both have found good enough alternatives. Its priority is further lowered along with Axon's;
  • The implications of adding a global state commitment seem to be under-discussed. Most of our discussion was on algorithm details, without considering whether it should be included in consensus. I see different opinions in the Bitcoin community (here is a quick link by Google);
  • Another option is to create and evaluate an application-level cell commitment first.

matt-nervos commented Feb 12, 2024

Thanks @janx for sharing this post. Regarding differing opinions, it looks like this is about level of trust for bootstrapping a node:

"It can be decentralized if the UTXO checkpoint is enforced as part of the block validity. This would require a soft fork to work."

I do think a soft fork is the appropriate goal. In my mind, though, the opportunity isn't faster bootstrapping of nodes, but rather more robust smart contracting: giving scripts running in CKB-VM a view over the global state, as well as giving dApps the ability to prove something about the current state.

This passage sums up concerns with this change:

"The main downside would be increased mining complexity because you need to compute the hash of the current ever changing UTXO set to include it in the block. However an hybrid approach of a periodic commitment (every few days / few weeks) would be doable and would not increase by too much the complexity of the block validation process."

We should consider whether this should only be required at the start of an epoch. I don't think much is lost in that case, and if we find that calculating the cell root bottlenecks block templating (or requires miners to hold state in memory), it seems a reasonable compromise. This would also reduce the validation burden on a full node.

Targeting an application-level overlay network like Utreexo initially would help to benchmark these considerations and find any edge cases that could exhaust resources.
