State snapshots
We propose to implement State Snapshots to let Nodes synchronize faster, to provide a fall-back mechanism for restoring the Blockchain's operation in case of block agreement failure, and to make State verifiable at certain points using hash sums and cryptographic signatures.
On the Blockchain, a State Snapshot is physically a transaction that holds the State dump's hash sums, signatures and the height it was made at. Because hashing the whole State dump may be time-consuming, it should be done asynchronously by all Consensus Nodes and then agreed upon through the consensus mechanism as part of transaction verification. This means Consensus Nodes should either plan beforehand at which height the snapshot will be taken so they can prepare the dump, or be able to get the state snapshot for a particular height from the internal storage engine in order to verify the state dump transaction.
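For illustration, a minimal sketch of what such a snapshot transaction payload could carry. The type and field names here are assumptions for the sake of discussion, not an existing Neo transaction format:

```go
package snapshot

import "crypto/sha256"

// Record is what a state snapshot transaction could carry on-chain: the
// height the dump was taken at, the hash sum of the serialized State dump
// and the signatures of the Consensus Nodes that agreed on it.
type Record struct {
	Height     uint32            // block height the dump corresponds to
	DumpHash   [sha256.Size]byte // hash sum of the serialized State dump (SHA-256 is an assumption)
	Signatures map[string][]byte // consensus node public key (hex) -> signature over DumpHash
}
```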
The State dump data itself is stored off-chain: it may be kept in NeoFS, requested from other nodes via P2P, or distributed via HTTP.
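A rough sketch of the HTTP distribution case, assuming the hash sum comes from the hypothetical snapshot record above; the function name and plain SHA-256 check are illustrative assumptions:

```go
package snapshot

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"fmt"
	"io"
	"net/http"
)

// FetchAndVerify downloads a State dump from the given URL and checks its
// SHA-256 against the hash sum taken from the on-chain snapshot record.
func FetchAndVerify(url string, expected [sha256.Size]byte) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	dump, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	sum := sha256.Sum256(dump)
	if !bytes.Equal(sum[:], expected[:]) {
		return nil, errors.New("dump hash sum does not match the snapshot record")
	}
	return dump, nil
}
```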
If a node is not willing to use an external State dump, it may regenerate State from the genesis block up to the defined height and not only verify the checksum, but also compare it against the signed hashes from the snapshot.
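Checking the signed hashes could look roughly like the sketch below. Neo actually uses secp256r1 keys and multisig verification scripts, so the generic ECDSA check, the ASN.1 signature format and the threshold parameter are simplifying assumptions:

```go
package snapshot

import (
	"crypto/ecdsa"
	"crypto/sha256"
)

// EnoughValidSignatures reports whether at least `threshold` of the given
// consensus node keys produced a valid signature over the dump hash.
func EnoughValidSignatures(dumpHash [sha256.Size]byte, sigs map[*ecdsa.PublicKey][]byte, threshold int) bool {
	valid := 0
	for pub, sig := range sigs {
		if ecdsa.VerifyASN1(pub, dumpHash[:], sig) {
			valid++
		}
	}
	return valid >= threshold
}
```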
Archive Blockchain's "tail"
Nodes that don't want to hold the whole blockchain in local storage, but still want to participate in the Neo network or propose blocks, can restore state from a State dump, verify it and accept only the following blocks, or even periodically drop the blockchain's "tail" while keeping the correct State (see the sketch below). That is enough to participate in block proposal, verification and all other normal Neo operations.
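An illustrative outline of the tail-pruning idea: keep only blocks after the last verified snapshot. The Chain interface is a hypothetical minimum this sketch assumes, not an existing node API:

```go
package snapshot

// Chain is the minimal storage interface this sketch assumes.
type Chain interface {
	LastVerifiedSnapshotHeight() uint32
	DropBlocksBelow(height uint32) error
}

// PruneTail drops blocks older than the last verified snapshot while the
// current State (and the snapshot itself) stays available.
func PruneTail(c Chain) error {
	return c.DropBlocksBelow(c.LastVerifiedSnapshotHeight())
}
```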
Hi @realloc, I think this is a good proposal. Is it feasible? Won't it take a lot of time to generate this snapshot and upload it somewhere?
Do you plan to have backups for different heights, or just one "backup" at the highest height?
Generating and signing such a snapshot would take some time. Snapshots should be generated asynchronously, once per reasonably big number of blocks, say once per day or once per week. Information about "official" snapshots must be stored in the blockchain. Ideally all of them should be available, not only the last one.
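A tiny sketch of that scheduling idea; the interval value is an arbitrary example (roughly one day of 15-second blocks), not a proposed constant:

```go
package snapshot

// SnapshotInterval is an illustrative value: roughly one day at 15-second
// blocks. The real interval would be a matter of configuration or governance.
const SnapshotInterval = 5760

// IsSnapshotHeight reports whether a snapshot should be prepared at height h.
func IsSnapshotHeight(h uint32) bool {
	return h > 0 && h%SnapshotInterval == 0
}
```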
This proposal is closely related to #1284.