I see there is currently a way to import a contract from a different network, but it tries to load all of the state in one run and fails if the storage is too large. I suggest a mechanism that allows lazy forks: instead of loading all the state up front, the state is downloaded lazily, on demand. When a key of the state is accessed, it goes to the RPC and downloads just that entry.
This would allow testing large contracts against live state!
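To make the idea concrete, here is a minimal Rust sketch of what a lazily forked state store could look like: a local cache backed by a remote fetcher, where a key is only pulled over RPC the first time it is read. The `RemoteStateFetcher` trait, `LazyState` type, and `MockRpc` stand-in are hypothetical names used for illustration, not existing APIs.

```rust
use std::collections::HashMap;

/// Hypothetical abstraction over the live-network RPC: given a raw state key,
/// return its value (or None if it does not exist on chain).
trait RemoteStateFetcher {
    fn fetch(&self, key: &[u8]) -> Option<Vec<u8>>;
}

/// Lazily forked state: keys are pulled from the remote network only when
/// they are first accessed, then cached locally.
struct LazyState<F: RemoteStateFetcher> {
    cache: HashMap<Vec<u8>, Option<Vec<u8>>>, // memoises misses too
    remote: F,
}

impl<F: RemoteStateFetcher> LazyState<F> {
    fn new(remote: F) -> Self {
        Self { cache: HashMap::new(), remote }
    }

    /// Local writes shadow the remote value without any RPC call.
    fn set(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.cache.insert(key, Some(value));
    }

    /// On a cache miss, go to the RPC once and memoise the result.
    fn get(&mut self, key: &[u8]) -> Option<Vec<u8>> {
        if let Some(cached) = self.cache.get(key) {
            return cached.clone();
        }
        let fetched = self.remote.fetch(key);
        self.cache.insert(key.to_vec(), fetched.clone());
        fetched
    }
}

/// Stand-in for a real JSON-RPC client, so the sketch runs on its own.
struct MockRpc;
impl RemoteStateFetcher for MockRpc {
    fn fetch(&self, key: &[u8]) -> Option<Vec<u8>> {
        (key == b"counter").then(|| b"42".to_vec())
    }
}

fn main() {
    let mut state = LazyState::new(MockRpc);
    assert_eq!(state.get(b"counter"), Some(b"42".to_vec())); // first read hits the "RPC"
    assert_eq!(state.get(b"counter"), Some(b"42".to_vec())); // second read is served from cache
    assert_eq!(state.get(b"missing"), None);                 // misses are memoised too
    state.set(b"counter".to_vec(), b"43".to_vec());          // local write shadows the remote value
    assert_eq!(state.get(b"counter"), Some(b"43".to_vec()));
}
```

In a real implementation the fetcher would issue a `view_state` query for the forked account against an archival RPC node; the mock above only exists so the sketch runs standalone.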
Correct, only 3) needs to be done lazily. The view_state method could be extended so that it receives a prefix and only loads the state that starts with that prefix. The inner method being called already accepts the prefix.
The problem, however, is that nearcore cannot really tell whether fetching by prefix will yield a reasonable number of keys, so the current implementation applies a 50 KB limit whether or not a prefix is specified. That said, with the advances in flat storage this may be less of an issue nowadays. Pagination is also needed, and it could be another way to solve this on the nearcore side: only allow up to 100 keys per view_state request.
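As a rough sketch of the pagination idea, the loop below pulls a prefix in pages of at most 100 keys through a hypothetical paginated view_state interface. None of the names (`StatePage`, `view_state_page`, `next_cursor`) exist in nearcore today; they only illustrate the proposed contract.

```rust
/// Hypothetical shape of a paginated view_state response: a bounded batch of
/// entries plus a cursor for the next page.
struct StatePage {
    entries: Vec<(Vec<u8>, Vec<u8>)>,
    next_cursor: Option<String>,
}

/// Hypothetical client-side interface to a paginated view_state RPC.
trait PagedViewState {
    fn view_state_page(
        &self,
        account_id: &str,
        prefix: &[u8],
        cursor: Option<&str>,
        page_size: usize,
    ) -> StatePage;
}

/// Pull every key under `prefix` for `account_id`, 100 keys at a time,
/// instead of requesting the whole state in one response.
fn fetch_prefix<C: PagedViewState>(
    rpc: &C,
    account_id: &str,
    prefix: &[u8],
) -> Vec<(Vec<u8>, Vec<u8>)> {
    let mut all = Vec::new();
    let mut cursor: Option<String> = None;
    loop {
        let page = rpc.view_state_page(account_id, prefix, cursor.as_deref(), 100);
        all.extend(page.entries);
        match page.next_cursor {
            Some(next) => cursor = Some(next), // more pages remain
            None => break,                     // last page reached
        }
    }
    all
}

/// Minimal in-memory stand-in so the sketch runs without a network.
struct MockNode {
    state: Vec<(Vec<u8>, Vec<u8>)>, // sorted by key
}

impl PagedViewState for MockNode {
    fn view_state_page(
        &self,
        _account_id: &str,
        prefix: &[u8],
        cursor: Option<&str>,
        page_size: usize,
    ) -> StatePage {
        let start = cursor.map(|c| c.parse::<usize>().unwrap()).unwrap_or(0);
        let matching: Vec<_> = self
            .state
            .iter()
            .filter(|(k, _)| k.starts_with(prefix))
            .cloned()
            .collect();
        let end = (start + page_size).min(matching.len());
        let next_cursor = (end < matching.len()).then(|| end.to_string());
        StatePage { entries: matching[start..end].to_vec(), next_cursor }
    }
}

fn main() {
    let node = MockNode {
        state: (0..250u32)
            .map(|i| (format!("item:{i:04}").into_bytes(), vec![i as u8]))
            .collect(),
    };
    let entries = fetch_prefix(&node, "app.near", b"item:");
    assert_eq!(entries.len(), 250); // fetched in 3 pages of at most 100 keys each
    println!("fetched {} entries", entries.len());
}
```

Combined with the lazy fork above, pagination would only be needed for prefix scans; point lookups of individual keys stay a single small request either way.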