diff --git a/404.html b/404.html index b11c02b8f..4e17be0dc 100644 --- a/404.html +++ b/404.html @@ -91,7 +91,7 @@ diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html index a066a834d..bf682103e 100644 --- a/approved/0001-agile-coretime.html +++ b/approved/0001-agile-coretime.html @@ -90,7 +90,7 @@ diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html index 1c2cb298f..05879f861 100644 --- a/approved/0005-coretime-interface.html +++ b/approved/0005-coretime-interface.html @@ -90,7 +90,7 @@ diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html index 4986be3ea..864079b28 100644 --- a/approved/0007-system-collator-selection.html +++ b/approved/0007-system-collator-selection.html @@ -90,7 +90,7 @@ diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html index 357d1e57b..f139d9123 100644 --- a/approved/0008-parachain-bootnodes-dht.html +++ b/approved/0008-parachain-bootnodes-dht.html @@ -90,7 +90,7 @@ diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html index a7a6940dd..e850e930f 100644 --- a/approved/0012-process-for-adding-new-collectives.html +++ b/approved/0012-process-for-adding-new-collectives.html @@ -90,7 +90,7 @@ diff --git a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html index 027e999a3..39f90d782 100644 --- a/approved/0014-improve-locking-mechanism-for-parachains.html +++ b/approved/0014-improve-locking-mechanism-for-parachains.html @@ -90,7 +90,7 @@ diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html index b3230ae55..61ade4191 100644 --- a/approved/0022-adopt-encointer-runtime.html +++ b/approved/0022-adopt-encointer-runtime.html @@ -90,7 +90,7 @@ diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html index aa203152f..5d070f21b 100644 --- a/approved/0032-minimal-relay.html +++ b/approved/0032-minimal-relay.html @@ -90,7 +90,7 @@ diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html index c9fb9791d..d6ea1f197 100644 --- a/approved/0050-fellowship-salaries.html +++ b/approved/0050-fellowship-salaries.html @@ -90,7 +90,7 @@ diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html index 899280ed5..e8488578e 100644 --- a/approved/0056-one-transaction-per-notification.html +++ b/approved/0056-one-transaction-per-notification.html @@ -90,7 +90,7 @@ @@ -271,7 +271,7 @@
None. This is a simple isolated change.
- +Table of Contents
Start Date | 2023-07-04 |
Description | Update the runtime-host interface to no longer make use of a host-side allocator |
Authors | Pierre Krieger |
Update the runtime-host interface to no longer make use of a host-side allocator.
-The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.
-The API of many host functions consists of allocating a buffer. For example, when calling ext_hashing_twox_256_version_1
, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1
on this pointer in order to free the buffer.
Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1
, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst case scenario simply consists of decreasing a number, and in the best case scenario is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1
.
Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.
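As a purely illustrative sketch (not part of this RFC), a Rust runtime could bundle its own allocator instead of importing ext_allocator_malloc_version_1 and ext_allocator_free_version_1; the use of the dlmalloc crate here is an assumption, and any GlobalAlloc implementation would do:

// Hypothetical sketch: the runtime ships its own allocator, so the host-side
// allocator and its two host functions are no longer needed.
// `dlmalloc` (with its "global" feature enabled) is only an example.
#[global_allocator]
static ALLOCATOR: dlmalloc::GlobalDlmalloc = dlmalloc::GlobalDlmalloc;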
-No attempt was made at convincing stakeholders.
-This section contains a list of new host functions to introduce.
-(func $ext_storage_read_version_2
- (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
-(func $ext_default_child_storage_read_version_2
- (param $child_storage_key i64) (param $key i64) (param $value_out i64)
- (param $offset i32) (result i64))
-
-The signature and behaviour of ext_storage_read_version_2
and ext_default_child_storage_read_version_2
are identical to their version 1 counterparts, but the return value has a different meaning.
-The new functions directly return the number of bytes that were written in the value_out
buffer. If the entry doesn't exist, a value of -1
is returned. Given that the host must never write more bytes than the size of the buffer in value_out
, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1
is not ambiguous.
The runtime execution stops with an error if value_out
is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.
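For illustration only, a runtime-side wrapper around this host function might look as follows; the pointer-size packing (pointer in the lower 32 bits, length in the upper 32 bits) follows the existing host-API convention, and the helper names are assumptions, not part of this RFC:

// Illustrative sketch of calling ext_storage_read_version_2 from the runtime.
extern "C" {
    fn ext_storage_read_version_2(key: u64, value_out: u64, offset: u32) -> i64;
}

// Packs a pointer and a length into a "pointer-size" value.
fn pack(ptr: *const u8, len: usize) -> u64 {
    (ptr as u32 as u64) | ((len as u64) << 32)
}

// Reads a storage value into `buf` starting at `offset`, returning the number
// of bytes written, or `None` if the entry doesn't exist.
fn storage_read(key: &[u8], buf: &mut [u8], offset: u32) -> Option<usize> {
    let ret = unsafe {
        ext_storage_read_version_2(
            pack(key.as_ptr(), key.len()),
            pack(buf.as_ptr(), buf.len()),
            offset,
        )
    };
    if ret == -1 { None } else { Some(ret as usize) }
}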
(func $ext_storage_next_key_version_2
- (param $key i64) (param $out i64) (return i32))
-(func $ext_default_child_storage_next_key_version_2
- (param $child_storage_key i64) (param $key i64) (param $out i64) (return i32))
-
-The behaviour of these functions is identical to their version 1 counterparts.
-Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an out
parameter containing a pointer-size to the memory location where the host writes the output. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out
.
-These functions return the size, in bytes, of the next key, or 0
if there is no next key. If the size of the next key is larger than the buffer in out
, the bytes of the key that fit the buffer are written to out
and any extra byte that doesn't fit is discarded.
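A minimal sketch of how a runtime might consume these semantics, following the prefix-enumeration pattern described in the notes below; pack is the pointer-size helper from the previous sketch, and the buffer size is an assumption:

// Illustrative sketch: enumerating the keys under a prefix with a fixed buffer.
extern "C" {
    fn ext_storage_next_key_version_2(key: u64, out: u64) -> u32;
}

fn keys_with_prefix(prefix: &[u8]) -> Vec<Vec<u8>> {
    let mut keys = Vec::new();
    let mut current = prefix.to_vec();
    let mut buf = [0u8; 96]; // Sized for the keys expected under this prefix.
    loop {
        let size = unsafe {
            ext_storage_next_key_version_2(
                pack(current.as_ptr(), current.len()),
                pack(buf.as_ptr(), buf.len()),
            )
        };
        if size == 0 {
            break; // No next key.
        }
        let written = (size as usize).min(buf.len());
        let next = &buf[..written];
        if !next.starts_with(prefix) {
            break; // Left the prefix; the possibly-truncated key is enough to detect this.
        }
        keys.push(next.to_vec());
        current = next.to_vec();
    }
    keys
}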
Some notes:
-0
can unambiguously be used to indicate the lack of next key.ext_storage_next_key_version_2
and ext_default_child_storage_next_key_version_2
are typically used in order to enumerate keys that start with a certain prefix. Given that storage keys are constructed by concatenating hashes, the runtime is expected to know the size of the next key and can allocate a buffer that can fit said key. When the next key doesn't belong to the desired prefix, it might not fit the buffer, but given that the start of the key is written to the buffer anyway, this can be detected in order to avoid calling the function a second time with a larger buffer.(func $ext_hashing_keccak_256_version_2
- (param $data i64) (param $out i32))
-(func $ext_hashing_keccak_512_version_2
- (param $data i64) (param $out i32))
-(func $ext_hashing_sha2_256_version_2
- (param $data i64) (param $out i32))
-(func $ext_hashing_blake2_128_version_2
- (param $data i64) (param $out i32))
-(func $ext_hashing_blake2_256_version_2
- (param $data i64) (param $out i32))
-(func $ext_hashing_twox_64_version_2
- (param $data i64) (param $out i32))
-(func $ext_hashing_twox_128_version_2
- (param $data i64) (param $out i32))
-(func $ext_hashing_twox_256_version_2
- (param $data i64) (param $out i32))
-(func $ext_trie_blake2_256_root_version_3
- (param $data i64) (param $version i32) (param $out i32))
-(func $ext_trie_blake2_256_ordered_root_version_3
- (param $data i64) (param $version i32) (param $out i32))
-(func $ext_trie_keccak_256_root_version_3
- (param $data i64) (param $version i32) (param $out i32))
-(func $ext_trie_keccak_256_ordered_root_version_3
- (param $data i64) (param $version i32) (param $out i32))
-(func $ext_default_child_storage_root_version_3
- (param $child_storage_key i64) (param $out i32))
-(func $ext_crypto_ed25519_generate_version_2
- (param $key_type_id i32) (param $seed i64) (param $out i32))
-(func $ext_crypto_sr25519_generate_version_2
- (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
-(func $ext_crypto_ecdsa_generate_version_2
- (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
-
-The behaviour of these functions is identical to their version 1 or version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of these functions accepts an out
parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine.
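An illustrative runtime-side call, under the same conventions as the sketches above (fixed 32-byte output written directly into a stack buffer):

// Illustrative sketch: hashing into a caller-provided stack buffer.
extern "C" {
    fn ext_hashing_blake2_256_version_2(data: u64, out: u32);
}

fn blake2_256(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    unsafe {
        // `pack` is the pointer-size helper from the earlier sketch.
        ext_hashing_blake2_256_version_2(pack(data.as_ptr(), data.len()), out.as_mut_ptr() as u32);
    }
    out
}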
(func $ext_default_child_storage_root_version_3
- (param $child_storage_key i64) (param $out i32))
-(func $ext_storage_root_version_3
- (param $out i32))
-
-The behaviour of these functions is identical to their version 1 and version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new versions of these functions accept an out
parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine.
I have taken the liberty of using version 1 of these functions as a base rather than version 2, as a PPP deprecating version 2 of these functions has previously been accepted: https://github.com/w3f/PPPs/pull/6.
-(func $ext_storage_clear_prefix_version_3
- (param $prefix i64) (param $limit i64) (param $removed_count_out i32)
- (return i32))
-(func $ext_default_child_storage_clear_prefix_version_3
- (param $child_storage_key i64) (param $prefix i64)
- (param $limit i64) (param $removed_count_out i32) (return i32))
-(func $ext_default_child_storage_kill_version_4
- (param $child_storage_key i64) (param $limit i64)
- (param $removed_count_out i32) (return i32))
-
-The behaviour of these functions is identical to their version 2 and 3 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, versions 3 and 4 of these functions accept a removed_count_out
parameter containing the memory location of an 8-byte buffer where the host writes the number of keys that were removed, in little endian. The runtime execution stops with an error if removed_count_out
is outside of the range of the memory of the virtual machine. The functions return 1 to indicate that there are keys remaining, and 0 to indicate that all keys have been removed.
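As an illustration of the return-value and removed_count_out semantics (assuming, as in version 2, that limit is a pointer-size to a SCALE-encoded Option<u32>; the helper names are assumptions):

// Illustrative sketch: clearing a prefix in batches until no keys remain.
extern "C" {
    fn ext_storage_clear_prefix_version_3(prefix: u64, limit: u64, removed_count_out: u32) -> u32;
}

fn clear_prefix_fully(prefix: &[u8], scale_encoded_limit: &[u8]) -> u64 {
    let mut total_removed = 0u64;
    loop {
        let mut removed = [0u8; 8]; // 8-byte little-endian counter written by the host.
        let keys_remaining = unsafe {
            ext_storage_clear_prefix_version_3(
                pack(prefix.as_ptr(), prefix.len()),
                pack(scale_encoded_limit.as_ptr(), scale_encoded_limit.len()),
                removed.as_mut_ptr() as u32,
            )
        };
        total_removed += u64::from_le_bytes(removed);
        if keys_remaining == 0 {
            break; // All keys under the prefix have been removed.
        }
    }
    total_removed
}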
Note that there is an alternative proposal to add new host functions with the same names: https://github.com/w3f/PPPs/pull/7. This alternative doesn't conflict with this one except for the version number. One proposal or the other will have to use versions 4 and 5 rather than 3 and 4.
-(func $ext_crypto_ed25519_sign_version_2
- (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
-(func $ext_crypto_sr25519_sign_version_2
- (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
-(func $ext_crypto_ecdsa_sign_version_2
- (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
-(func $ext_crypto_ecdsa_sign_prehashed_version_2
- (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i64))
-
-The behaviour of these functions is identical to their version 1 counterparts. The new versions of these functions accept an out
parameter containing the memory location where the host writes the signature. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out
. The signatures are always of a size known at compilation time. On success, these functions return 0
. If the public key can't be found in the keystore, these functions return 1
and do not write anything to out
.
Note that the return value is 0 on success and 1 on failure, while the previous versions of these functions write 1 on success (as it represents a SCALE-encoded Some
) and 0 on failure (as it represents a SCALE-encoded None
). Returning 0 on success and non-zero on failure is consistent with common practices in the C programming language and is less surprising than the opposite.
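A sketch of how the new convention might be consumed by the runtime (parameter passing for key_type_id and key is assumed to be unchanged from version 1; sr25519 signatures are 64 bytes):

// Illustrative sketch: 0 = success, 1 = public key not found in the keystore.
extern "C" {
    fn ext_crypto_sr25519_sign_version_2(key_type_id: u32, key: u32, msg: u64, out: u32) -> u32;
}

fn sr25519_sign(key_type_id: u32, key: u32, msg: &[u8]) -> Option<[u8; 64]> {
    let mut sig = [0u8; 64];
    let ret = unsafe {
        ext_crypto_sr25519_sign_version_2(
            key_type_id, // Passed exactly as in version 1.
            key,         // Passed exactly as in version 1.
            pack(msg.as_ptr(), msg.len()),
            sig.as_mut_ptr() as u32,
        )
    };
    if ret == 0 { Some(sig) } else { None }
}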
(func $ext_crypto_secp256k1_ecdsa_recover_version_3
- (param $sig i32) (param $msg i32) (param $out i32) (return i64))
-(func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3
- (param $sig i32) (param $msg i32) (param $out i32) (return i64))
-
-The behaviour of these functions is identical to their version 2 counterparts. The new versions of these functions accept an out
parameter containing the memory location where the host writes the recovered public key. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out
. The output is always of a size known at compilation time. On success, these functions return 0
. On failure, these functions return a non-zero value and do not write anything to out
.
The non-zero value returned on failure is:
-These values are equal to the values returned on error by the version 2 (see https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), but incremented by 1 in order to reserve 0 for success.
-(func $ext_crypto_ed25519_num_public_keys_version_1
- (param $key_type_id i32) (return i32))
-(func $ext_crypto_ed25519_public_key_version_2
- (param $key_type_id i32) (param $key_index i32) (param $out i32))
-(func $ext_crypto_sr25519_num_public_keys_version_1
- (param $key_type_id i32) (return i32))
-(func $ext_crypto_sr25519_public_key_version_2
- (param $key_type_id i32) (param $key_index i32) (param $out i32))
-(func $ext_crypto_ecdsa_num_public_keys_version_1
- (param $key_type_id i32) (return i32))
-(func $ext_crypto_ecdsa_public_key_version_2
- (param $key_type_id i32) (param $key_index i32) (param $out i32))
-
-These functions supersede the ext_crypto_ed25519_public_key_version_1
, ext_crypto_sr25519_public_key_version_1
, and ext_crypto_ecdsa_public_key_version_1
host functions.
Instead of calling ext_crypto_ed25519_public_key_version_1
in order to obtain the list of all keys at once, the runtime should instead call ext_crypto_ed25519_num_public_keys_version_1
in order to obtain the number of public keys available, then ext_crypto_ed25519_public_key_version_2
repeatedly.
-The ext_crypto_ed25519_public_key_version_2
function writes the public key of the given key_index
to the memory location designated by out
. The key_index
must be between 0 (included) and n
(excluded), where n
is the value returned by ext_crypto_ed25519_num_public_keys_version_1
. Execution must trap if key_index
is out of range.
The same explanations apply to ext_crypto_sr25519_public_key_version_1
and ext_crypto_ecdsa_public_key_version_1
.
Host implementers should be aware that the list of public keys (including their ordering) must not change while the runtime is running. This is most likely done by copying the list of all available keys either at the start of the execution or the first time the list is accessed.
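An illustrative enumeration loop, assuming 32-byte ed25519 public keys and the usual pointer conventions:

// Illustrative sketch: listing all ed25519 public keys of a key type.
extern "C" {
    fn ext_crypto_ed25519_num_public_keys_version_1(key_type_id: u32) -> u32;
    fn ext_crypto_ed25519_public_key_version_2(key_type_id: u32, key_index: u32, out: u32);
}

fn ed25519_public_keys(key_type_id: u32) -> Vec<[u8; 32]> {
    let n = unsafe { ext_crypto_ed25519_num_public_keys_version_1(key_type_id) };
    (0..n)
        .map(|key_index| {
            let mut key = [0u8; 32];
            unsafe {
                ext_crypto_ed25519_public_key_version_2(key_type_id, key_index, key.as_mut_ptr() as u32)
            };
            key
        })
        .collect()
}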
-(func $ext_offchain_http_request_start_version_2
- (param $method i64) (param $uri i64) (param $meta i64) (result i32))
-
-The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the request identifier in it, and returning a pointer to it, version 2 of this function simply returns the newly-assigned identifier of the HTTP request. On failure, this function returns -1
. An identifier of -1
is invalid and is reserved to indicate failure.
(func $ext_offchain_http_request_write_body_version_2
- (param $request_id i32) (param $chunk i64) (param $deadline i64) (result i32))
-(func $ext_offchain_http_response_read_body_version_2
- (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64))
-
-The behaviour of these functions is identical to their version 1 counterparts. Instead of allocating a buffer, writing two bytes in it, and returning a pointer to it, the new versions of these functions simply indicate what happened:
-ext_offchain_http_request_write_body_version_2
, 0 on success.ext_offchain_http_response_read_body_version_2
, 0 or a non-zero number of bytes on success.These values are equal to the values returned on error by the version 1 (see https://spec.polkadot.network/chap-host-api#defn-http-error), but tweaked in order to reserve positive numbers for success.
-When it comes to ext_offchain_http_response_read_body_version_2
, the host implementers must not read too much data at once in order to not create ambiguity in the returned value. Given that the size of the buffer
is always less than or equal to 4 GiB, this is not a problem.
(func $ext_offchain_http_response_wait_version_2
- (param $ids i64) (param $deadline i64) (param $out i32))
-
-The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an out
parameter containing the memory location where the host writes the output. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine.
The encoding of the response code is also modified compared to its version 1 counterpart and each response code now encodes to 4 little endian bytes as described below:
-The buffer passed to out
must always have a size of 4 * n
where n
is the number of elements in the ids list
.
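A sketch of how a runtime could decode that buffer (assuming ids and deadline keep their version 1 SCALE encodings; n is the number of request ids, and pack is the pointer-size helper from the earlier sketch):

// Illustrative sketch: one 4-byte little-endian status code per request id.
extern "C" {
    fn ext_offchain_http_response_wait_version_2(ids: u64, deadline: u64, out: u32);
}

fn wait_for_responses(ids_scale: &[u8], deadline_scale: &[u8], n: usize) -> Vec<u32> {
    let mut out = vec![0u8; 4 * n];
    unsafe {
        ext_offchain_http_response_wait_version_2(
            pack(ids_scale.as_ptr(), ids_scale.len()),
            pack(deadline_scale.as_ptr(), deadline_scale.len()),
            out.as_mut_ptr() as u32,
        )
    };
    out.chunks_exact(4)
        .map(|c| u32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}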
(func $ext_offchain_http_response_header_name_version_1
- (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
-(func $ext_offchain_http_response_header_value_version_1
- (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
-
-These functions supersede the ext_offchain_http_response_headers_version_1
host function.
Contrary to ext_offchain_http_response_headers_version_1
, only one header indicated by header_index
can be read at a time. Instead of calling ext_offchain_http_response_headers_version_1
once, the runtime should call ext_offchain_http_response_header_name_version_1
and ext_offchain_http_response_header_value_version_1
multiple times with an increasing header_index
, until a value of -1
is returned.
These functions accept an out
parameter containing a pointer-size to the memory location where the header name or value should be written. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out
.
These functions return the size, in bytes, of the header name or header value. If the request doesn't exist or is in an invalid state (as documented for ext_offchain_http_response_headers_version_1
) or the header_index
is out of range, a value of -1
is returned. Given that the host must never write more bytes than the size of the buffer in out
, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1
is not ambiguous.
If the buffer in out
is too small to fit the entire header name or value, only the bytes that fit are written and the rest are discarded.
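An illustrative loop over the headers of a response, using fixed-size buffers (the sizes are assumptions) and stopping at the first -1:

// Illustrative sketch: walking response headers until -1 is returned.
extern "C" {
    fn ext_offchain_http_response_header_name_version_1(request_id: u32, header_index: u32, out: u64) -> i64;
    fn ext_offchain_http_response_header_value_version_1(request_id: u32, header_index: u32, out: u64) -> i64;
}

fn response_headers(request_id: u32) -> Vec<(Vec<u8>, Vec<u8>)> {
    let mut headers = Vec::new();
    let mut name = [0u8; 256];
    let mut value = [0u8; 1024];
    for index in 0.. {
        let name_len = unsafe {
            ext_offchain_http_response_header_name_version_1(
                request_id, index, pack(name.as_ptr(), name.len()),
            )
        };
        if name_len == -1 {
            break; // No more headers (or invalid request).
        }
        let value_len = unsafe {
            ext_offchain_http_response_header_value_version_1(
                request_id, index, pack(value.as_ptr(), value.len()),
            )
        };
        let name_len = (name_len as usize).min(name.len());
        let value_len = (value_len.max(0) as usize).min(value.len());
        headers.push((name[..name_len].to_vec(), value[..value_len].to_vec()));
    }
    headers
}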
(func $ext_offchain_submit_transaction_version_2
- (param $data i64) (return i32))
-(func $ext_offchain_http_request_add_header_version_2
- (param $request_id i32) (param $name i64) (param $value i64) (result i32))
-
-Instead of allocating a buffer, writing 1
or 0
in it, and returning a pointer to it, the version 2 of these functions return 0
or 1
, where 0
indicates success and 1
indicates failure. The runtime must interpret any non-0
value as failure, but the client must always return 1
in case of failure.
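For illustration, the new convention maps naturally onto a Result on the runtime side:

// Illustrative sketch: 0 = success, any non-zero value = failure.
extern "C" {
    fn ext_offchain_submit_transaction_version_2(data: u64) -> u32;
}

fn submit_transaction(encoded_extrinsic: &[u8]) -> Result<(), ()> {
    let ret = unsafe {
        ext_offchain_submit_transaction_version_2(
            pack(encoded_extrinsic.as_ptr(), encoded_extrinsic.len()),
        )
    };
    // Any non-zero value is treated as failure, as required of the runtime.
    if ret == 0 { Ok(()) } else { Err(()) }
}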
(func $ext_offchain_local_storage_read_version_1
- (param $kind i32) (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
-
-This function supersedes the ext_offchain_local_storage_get_version_1
host function, and uses an API and logic similar to ext_storage_read_version_2
.
It reads the offchain local storage key indicated by kind
and key
starting at the byte indicated by offset
, and writes the value to the pointer-size indicated by value_out
.
The function returns the number of bytes that were written in the value_out
buffer. If the entry doesn't exist, a value of -1
is returned. Given that the host must never write more bytes than the size of the buffer in value_out
, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1
is not ambiguous.
The runtime execution stops with an error if value_out
is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.
(func $ext_offchain_network_peer_id_version_1
- (param $out i64))
-
-This function writes the PeerId
of the local node to the memory location indicated by out
. A PeerId
is always 38 bytes long.
-The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine.
(func $ext_input_size_version_1
- (return i64))
-(func $ext_input_read_version_1
- (param $offset i64) (param $out i64))
-
-When a runtime function is called, the host uses the allocator to allocate memory within the runtime in which to write the input data. These two new host functions provide an alternative way to access the input that doesn't make use of the allocator.
-The ext_input_size_version_1
host function returns the size in bytes of the input data.
The ext_input_read_version_1
host function copies some data from the input data to the memory of the runtime. The offset
parameter indicates the offset within the input data at which to start copying, and must be less than or equal to the value returned by ext_input_size_version_1
. The out
parameter is a pointer-size containing the buffer to write to.
-The runtime execution stops with an error if offset
is strictly greater than the size of the input data, or if out
is outside of the range of the memory of the virtual machine, even if the amount of data to copy would be 0 bytes.
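An illustrative sketch of a runtime fetching its entire input without the host allocator (helper names are assumptions; pack is the pointer-size helper from the earlier sketch):

// Illustrative sketch: a runtime entry point reading its input in one call.
extern "C" {
    fn ext_input_size_version_1() -> u64;
    fn ext_input_read_version_1(offset: u64, out: u64);
}

fn read_input() -> Vec<u8> {
    let size = unsafe { ext_input_size_version_1() } as usize;
    let mut input = vec![0u8; size];
    unsafe {
        // Copy the whole input, starting at offset 0.
        ext_input_read_version_1(0, pack(input.as_ptr(), input.len()));
    }
    input
}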
In addition to the new host functions, this RFC proposes two changes to the runtime-host interface:
+(func (result i64))
.__heap_base
All the host functions that are being superseded by new host functions are now considered deprecated and should no longer be used. -The following other host functions are also considered deprecated:
+ext_storage_get_version_1
ext_default_child_storage_get_version_1
ext_allocator_malloc_version_1
ext_allocator_free_version_1
ext_offchain_network_state_version_1
This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.
-The API of these new functions was heavily inspired by the APIs used in the C programming language.
-The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.
-It is expected that most host functions are as fast as or faster than their deprecated counterparts, with the following exceptions:
-ext_input_size_version_1
/ext_input_read_version_1
is inherently slower than obtaining a buffer with the entire data due to the two extra function calls and the extra copying. However, given that this only happens once per runtime call, the cost is expected to be negligible.
The ext_crypto_*_public_keys
, ext_offchain_network_state
, and ext_offchain_http_*
host functions are likely slightly slower than their deprecated counterparts, but given that they are used only in offchain workers this is acceptable.
It is unclear how replacing ext_storage_get
with ext_storage_read
and ext_default_child_storage_get
with ext_default_child_storage_read
will impact performance.
It is unclear how the changes to ext_storage_next_key
and ext_default_child_storage_next_key
will impact performance.
After this RFC, the allocator can be removed from the host's source code altogether in a future version, by removing support for all the deprecated host functions. -This would remove the possibility of synchronizing older blocks, which is probably controversial and requires some preparation that is out of scope of this RFC.
- -Table of Contents
-This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for creating NFT collections. The objective is to lower the barrier to entry for artists, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.
-The current deposit of 10 DOT for collection creation on the Polkadot Asset Hub presents a significant financial barrier for many artists. By lowering the deposit requirements, we aim to encourage more artists to participate in the Polkadot NFT ecosystem, thereby enriching the diversity and vibrancy of the community and its offerings.
+This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hubs for creating an NFT collection and minting an individual NFT, and lowering the corresponding metadata and attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.
+The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2 DOT for metadata and attribute deposit) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub presents a significant financial barrier for many NFT creators. By lowering the deposit requirements, we aim to encourage more NFT creators to participate in the Polkadot NFT ecosystem, thereby enriching the diversity and vibrancy of the community and its offerings.
The actual implementation of the deposit is an arbitrary number coming from the Uniques pallet. It is not the result of any economic analysis. This proposal aims to adjust the deposit from a constant to dynamic pricing based on the deposit
function, with respect to stakeholders.
deposit
function adjusted by the corresponding pricing mechanism.
-This RFC proposes a revision of the deposit constants in the nfts pallet on the Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.
This RFC suggests modifying deposit constants defined in the nfts
pallet on the Polkadot Asset Hub to require a lower deposit. The reduced deposit amount should be determined by the deposit
adjusted by the pricing mechanism (arbitrary number/another pricing function).
Current deposit requirements are as follows:
+Current deposit requirements are as follows
+Looking at the currently implemented code structure, we can see that the pricing re-uses the logic of how the Uniques deposits are defined:
#![allow(unused)]
fn main() {
parameter_types! {
@@ -2230,9 +1986,11 @@ Explanation
    pub const NftsAttributeDepositBase: Balance = UniquesAttributeDepositBase::get();
    pub const NftsDepositPerByte: Balance = UniquesDepositPerByte::get();
}
-
-//
-parameter_types! {
+}
In the existing setup, the Uniques are defined with specific deposit values for different elements:
-#![allow(unused)]
+fn main() {
+parameter_types! {
    pub const UniquesCollectionDeposit: Balance = UNITS / 10; // 1 / 10 UNIT deposit to create a collection
    pub const UniquesItemDeposit: Balance = UNITS / 1_000; // 1 / 1000 UNIT deposit to mint an item
    pub const UniquesMetadataDepositBase: Balance = deposit(1, 129);
@@ -2240,7 +1998,38 @@ Explanation
    pub const UniquesDepositPerByte: Balance = deposit(0, 1);
}
}
The proposed change would modify the deposit constants to require a lower deposit. The reduced deposit amount should be determined by deposit
adjusted by an arbitrary number.
As we can see in the code definition above, the current code does not use the deposit
function in the following instances: UniquesCollectionDeposit
and UniquesItemDeposit
.
This proposed modification adjusts the deposits to use the deposit
function instead of using an arbitrary number.
+#![allow(unused)]
+fn main() {
+parameter_types! {
+    pub const NftsCollectionDeposit: Balance = deposit(1, 130);
+    pub const NftsItemDeposit: Balance = deposit(1, 164);
+    pub const NftsMetadataDepositBase: Balance = deposit(1, 129);
+    pub const NftsAttributeDepositBase: Balance = deposit(1, 0);
+    pub const NftsDepositPerByte: Balance = deposit(0, 1);
+}
+}
The calculations shown below were produced using the following repository: rfc-pricing.
+Polkadot
+| Name                 | Current price implementation | Proposed (using the new deposit function) |
+|----------------------|------------------------------|-------------------------------------------|
+| collectionDeposit    | 10 DOT                       | 0.20064 DOT                               |
+| itemDeposit          | 0.01 DOT                     | 0.20081 DOT                               |
+| metadataDepositBase  | 0.20129 DOT                  | 0.20076 DOT                               |
+| attributeDepositBase | 0.2 DOT                      | 0.2 DOT                                   |
+Similarly, the prices for the Kusama ecosystem were calculated as:
+Kusama:
+| Name                 | Current price implementation | Proposed Price in KSM |
+|----------------------|------------------------------|-----------------------|
+| collectionDeposit    | 0.1 KSM                      | 0.006688 KSM          |
+| itemDeposit          | 0.001 KSM                    | 0.000167 KSM          |
+| metadataDepositBase  | 0.006709666617 KSM           | 0.0006709666617 KSM   |
+| attributeDepositBase | 0.00666666666 KSM            | 0.000666666666 KSM    |
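For context, the deposit helper referenced above scales linearly with the number of stored items and bytes. The sketch below only illustrates its shape; the per-item and per-byte prices are assumptions back-derived from the tables above (deposit(1, 0) ≈ 0.2 DOT and deposit(1, 130) ≈ 0.20064 DOT), not the runtime's actual constants:

// Illustrative only: constants are back-derived from the tables in this RFC,
// not taken from the runtime's constants module.
const UNITS: u128 = 10_000_000_000; // 1 DOT in plancks
const PER_ITEM: u128 = UNITS / 5;   // ~0.2 DOT per stored item (assumption)
const PER_BYTE: u128 = 49_230;      // ~0.0000049 DOT per stored byte (assumption)

const fn deposit(items: u32, bytes: u32) -> u128 {
    items as u128 * PER_ITEM + bytes as u128 * PER_BYTE
}

// deposit(1, 130) = 2_000_000_000 + 130 * 49_230 plancks ≈ 0.2006 DOT,
// matching the proposed NftsCollectionDeposit above.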
+In an effort to further lower barriers to entry and foster greater inclusivity, we propose additional modifications to the pricing structure. These proposed reductions are based on a collaborative and calculated approach, involving the consensus of leading NFT creators within the Polkadot and Kusama Asset Hub communities. The adjustments to deposit amounts are not made arbitrarily. Instead, they are the result of detailed discussions and analyses conducted with prominent NFT creators.
+Proposed Code Adjustments
-#![allow(unused)]
fn main() {
parameter_types! {
@@ -2251,68 +2040,76 @@ Explanation
    pub const NftsDepositPerByte: Balance = deposit(0, 1);
}
}
Prices and Proposed Prices on Polkadot Asset Hub: -Scroll right
-| **Name** | **Current price implementation** | **Price if DOT = 5$** | **Price if DOT goes to 50$** | **Proposed Price in DOT** | **Proposed Price if DOT = 5$** | **Proposed Price if DOT goes to 50$**|
-|---------------------------|----------------------------------|------------------------|-------------------------------|---------------------------|----------------------------------|--------------------------------------|
-| collectionDeposit | 10 DOT | 50 $ | 500 $ | 0.20064 DOT | ~1 $ | 10.32$ |
-| itemDeposit | 0.01 DOT | 0.05 $ | 0.5 $ | 0.005 DOT | 0.025 $ | 0.251$ |
-| metadataDepositBase | 0.20129 DOT | 1.00645 $ | 10.0645 $ | 0.0020129 DOT | 0.0100645 $ | 0.100645$ |
-| attributeDepositBase | 0.2 DOT | 1 $ | 10 $ | 0.002 DOT | 0.01 $ | 0.1$ |
-
-Prices and Proposed Prices on Kusama Asset Hub: -Scroll right
-| **Name** | **Current price implementation** | **Price if KSM = 23$** | **Price if KSM goes to 500$** | **Proposed Price in KSM** | **Proposed Price if KSM = 23$** | **Proposed Price if KSM goes to 500$** |
-|---------------------------|----------------------------------|------------------------|-------------------------------|---------------------------|----------------------------------|----------------------------------------|
-| collectionDeposit | 0.1 KSM | 2.3 $ | 50 $ | 0.006688 KSM | 0.154 $ | 3.34 $ |
-| itemDeposit | 0.001 KSM | 0.023 $ | 0.5 $ | 0.000167 KSM | 0.00385 $ | 0.0835 $ |
-| metadataDepositBase | 0.006709666617 KSM | 0.15432183319 $ | 3.3548333085 $ | 0.0006709666617 KSM | 0.015432183319 $ | 0.33548333085 $ |
-| attributeDepositBase | 0.00666666666 KSM | 0.15333333318 $ | 3.333333333 $ | 0.000666666666 KSM | 0.015333333318 $ | 0.3333333333 $ |
-
-
---Note: This is only a proposal for change and can be modified upon additional conversation.
-
Prices and Proposed Prices on Polkadot Asset Hub:
+Polkadot
+| Name                 | Current price implementation | Proposed Prices |
+|----------------------|------------------------------|-----------------|
+| collectionDeposit    | 10 DOT                       | 0.20064 DOT     |
+| itemDeposit          | 0.01 DOT                     | 0.005 DOT       |
+| metadataDepositBase  | 0.20129 DOT                  | 0.002 DOT       |
+| attributeDepositBase | 0.2 DOT                      | 0.002 DOT       |
+Kusama
+| Name                 | Current price implementation | Proposed Price in KSM |
+|----------------------|------------------------------|-----------------------|
+| collectionDeposit    | 0.1 KSM                      | 0.006688 KSM          |
+| itemDeposit          | 0.001 KSM                    | 0.000167 KSM          |
+| metadataDepositBase  | 0.006709666617 KSM           | 0.0006709666617 KSM   |
+| attributeDepositBase | 0.00666666666 KSM            | 0.000666666666 KSM    |
+Several innovative proposals have been considered to enhance the network's adaptability and manage deposit requirements more effectively:
+The concept of a weak governance origin, controlled by a consortium like the System Collective, has been proposed. This model would allow for dynamic adjustments of NFT deposit requirements in response to market conditions, adhering to storage deposit norms.
+Enhancements and Concerns:
+Another proposal is to use a mathematical function to regulate deposit prices, initially allowing low prices to encourage participation, followed by a gradual increase to prevent network bloat.
+Refinements:
+This approach suggests pegging the deposit value to a stable currency like the USD, introducing predictability and stability for network users.
+Considerations and Challenges:
+Each of these proposals offers unique advantages and challenges. The optimal approach may involve a combination of these ideas, carefully adjusted to address the specific needs and dynamics of the Polkadot and Kusama networks.
+Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. Highlighted below are cogent points extracted from the discourse on the Polkadot Forum conversation, which provide critical perspectives on the implications of such changes:
---But NFT deposits were chosen somewhat arbitrarily at genesis and it’s a good exercise to re-evaluate them and adapt if they are causing pain and if lowering them has little or no negative side effect (or if the trade-off is worth it). --> joepetrowski
-
--Underestimates mean that state grows faster, although not unbounded - effectively an economic subsidy on activity. Overestimates mean that the state grows slower - effectively an economic depressant on activity. --> rphmeier
-
-+Technical: We want to prevent state bloat, therefore using state should have a cost associated with it. --> joepetrowski
-
The discourse around modifying deposit requirements includes various perspectives:
+Adjusting NFT deposit requirements on Polkadot and Kusama Asset Hubs involves key challenges:
+State Growth and Technical Concerns: Lowering deposit requirements can lead to increased blockchain state size, potentially causing state bloat. This growth needs to be managed to prevent strain on the network's resources and maintain operational efficiency.
+Network Security and Market Response: Reduced deposits might increase transaction volume, potentially bloating the state, thereby impacting network security. Additionally, adapting to the cryptocurrency market's volatility is crucial. The mechanism for setting deposit amounts must be responsive yet stable, avoiding undue complexity for users.
+Economic Impact on Previous Stakeholders: The change could have varied economic effects on previous (before the change) creators, platform operators, and investors. Balancing these interests is essential to ensure the adjustment benefits the ecosystem without negatively impacting its value dynamics. However, in the particular case of the Polkadot and Kusama Asset Hubs this does not pose a concern, since there are currently very few collections and thus previous stakeholders wouldn't be much affected. As of 9th January 2024 there are 42 collections on the Polkadot Asset Hub and 191 on the Kusama Asset Hub, with relatively low volume.
+The change is backwards compatible. "Spam" could be prevented by an OpenGov proposal to forceDestroy
a list of collections that are not suitable.
The prevention of "spam" could be prevented by OpenGov proposal to forceDestoy
list of collections that are not suitable.
This change is not expected to have a significant impact on the overall performance of the Polkadot Asset Hub. However, monitoring the network closely, especially in the initial stages after implementation, is crucial to identify and mitigate any potential issues.
-Additionally, a supplementary proposal aims to augment the network's adaptability:
---Just from a technical perspective; I think the best we can do is to use a weak governance origin that is controlled by some consortium (ie. System Collective). -This origin could then update the NFT deposits any time the market conditions warrant it - obviously while honoring the storage deposit requirements. -To implement this, we need RFC#12 and the Parameters pallet from @xlc. --> OliverTY
-
This dynamic governance approach would facilitate a responsive and agile economic model for deposit management, ensuring that the network remains accessible and robust in the face of market volatility.
+The primary performance consideration stems from the potential for state bloat due to increased activity from lower deposit requirements. It's vital to monitor and manage this to avoid any negative impact on the chain's performance. Strategies for mitigating state bloat, including efficient data management and periodic reviews of storage requirements, will be essential.
The proposed change aims to enhance the user experience for artists, making Polkadot more accessible and user-friendly.
+The proposed change aims to enhance the user experience for artists, traders, and users of the Kusama and Polkadot Asset Hubs, making Polkadot and Kusama more accessible and user-friendly.
The change does not impact compatibility, as the redeposit
function is already implemented.
There remain unresolved questions regarding the implementation of a function-based pricing model for deposits and the feasibility of linking deposits to a USD(x) value. These aspects require further exploration and discussion to ascertain their viability and potential impact on the ecosystem.
If accepted, this RFC could pave the way for further discussions and proposals aimed at enhancing the inclusivity and accessibility of the Polkadot ecosystem. Future work could also explore having a weak governance origin for deposits as proposed by Oliver.
+We recommend initially lowering the deposit to the suggested levels. Subsequently, based on the outcomes and feedback, we can continue discussions on more complex models such as function-based pricing or currency-linked deposits.
+If accepted, this RFC could pave the way for further discussions and proposals aimed at enhancing the inclusivity and accessibility of the Polkadot ecosystem.
Table of Contents
This RFC proposes changes that enable the use of absolute locations in AccountId derivations, which allows protocols built using XCM to have static account derivations in any runtime, regardless of its position in the family hierarchy.
-These changes would allow protocol builders to leverage absolute locations to maintain the exact same derived account address across all networks in the ecosystem, thus enhancing user experience.
One such protocol, which is the original motivation for this proposal, is InvArch's Saturn Multisig, which gives users a unifying multisig and DAO experience across all XCM-connected chains.
-This proposal aims to make it possible to derive accounts for absolute locations, enabling protocols that require the ability to maintain the same derived account in any runtime. This is done by deriving accounts from the hash of described absolute locations, which are static across different destinations.
The same location can be represented in relative form and absolute form like so:
#![allow(unused)] @@ -3384,7 +3181,7 @@
WithCom
DescribeFamily
The
DescribeFamily
location descriptor is part of theHashedDescription
MultiLocation hashing system and exists to describe locations in an easy format for encoding and hashing, so that an AccountId can be derived from this MultiLocation.This implementation contains a match statement that does not match against absolute locations, so changes to it involve matching against absolute locations and providing appropriate descriptions for hashing.
-Drawbacks
+Drawbacks
No drawbacks have been identified with this proposal.
Testing, Security, and Privacy
Tests can be done using simple unit tests, as this is not a change to XCM itself but rather to types defined in
@@ -3402,7 +3199,7 @@xcm-builder
.Unresolved Questions
+Unresolved Questions
Implementation details and overall code is still up to discussion.
Table of Contents
@@ -3435,13 +3232,13 @@Summary
At the moment, we have a state_version
field on RuntimeVersion
that derives which state version is used for the
Storage.
We have a use case where we want the extrinsics root to be derived using StateVersion::V1
. Without defining a new field
under RuntimeVersion
,
we would like to propose adding system_version
that can be used to derive both storage and extrinsic state version.
Since the extrinsic state version is always StateVersion::V0
, deriving the extrinsic root requires the full extrinsic data.
This is problematic when we need to verify the extrinsics root and the extrinsics are large. This problem is
further explored in https://github.com/polkadot-fellows/RFCs/issues/19
In order to use project specific StateVersion for extrinsic roots, we proposed
an implementation that introduced a
parameter to frame_system::Config
but that unfortunately did not feel correct.
@@ -3483,7 +3280,7 @@
There should be no drawbacks as it would replace state_version
with the same behavior, but documentation should be updated
so that chains know which system_version
to use.
frame_system::Config
but did not feel that
is the correct way of introducing this change.
-I do not have any specific questions about this change at the moment.
IMO, this change is pretty self-contained and there won't be any future work necessary.
@@ -3543,9 +3340,9 @@This RFC proposes a new model for a sustainable on-demand parachain registration, involving a smaller initial deposit and periodic rent payments. The new model considers that on-demand chains may be unregistered and later re-registered. The proposed solution also ensures a quick startup for on-demand chains on Polkadot in such cases.
-With the support of on-demand parachains on Polkadot, there is a need to explore a new, more cost-effective model for registering validation code. In the current model, the parachain manager is responsible for reserving a unique ParaId
and covering the cost of storing the validation code of the parachain. These costs can escalate, particularly if the validation code is large. We need a better, sustainable model for registering on-demand parachains on Polkadot to help smaller teams deploy more easily.
This RFC suggests a new payment model to create a more financially viable approach to on-demand parachain registration. In this model, a lower initial deposit is required, followed by recurring payments upon parachain registration.
This new model will coexist with the existing one-time deposit payment model, offering teams seeking to deploy on-demand parachains on Polkadot a more cost-effective alternative.
@@ -3559,11 +3356,11 @@This RFC proposes a set of changes that will enable the new rent based approach to registering and storing validation code on-chain. The new model, compared to the current one, will require periodic rent payments. The parachain won't be pruned automatically if the rent is not paid, but by permitting anyone to prune the parachain and rewarding the caller, there will be an incentive for the removal of the validation code.
On-demand parachains should still be able to utilize the current one-time payment model. However, given the size of the deposit required, it's highly likely that most on-demand parachains will opt for the new rent-based model.
@@ -3670,7 +3467,7 @@To enable parachain re-registration, we should introduce a new extrinsic in the paras-registrar
pallet that allows this. The logic of this extrinsic will be the same as regular registration, with the distinction that it can be called by anyone, and the required deposit will be smaller since it only has to cover the storage of the validation code.
This RFC does not alter the process of reserving a ParaId
, and therefore, it does not propose reducing it, even though such a reduction could be beneficial.
Even though this RFC doesn't delve into the specifics of the configuration values for parachain registration but rather focuses on the mechanism, configuring it carelessly could lead to potential problems.
Since the validation code hash and head data are not removed when the parachain is pruned but only when the deregister
extrinsic is called, the T::DataDepositPerByte
must be set to a higher value to create a strong enough incentive for removing it from the state.
This RFC does not break compatibility.
Prior discussion on this topic: https://github.com/paritytech/polkadot-sdk/issues/1796
-None at this time.
As noted in this GitHub issue, we want to raise the per-byte cost of on-chain data storage. However, a substantial increase in this cost would make it highly impractical for on-demand parachains to register on Polkadot. @@ -3747,9 +3544,9 @@
Add a metadata digest value (33-byte constant within fixed spec_version
) to Signed Extensions to supply the signing party with proof of correct extrinsic interpretation. The digest value is generated once before release and is well-known and deterministic. The digest mechanism is designed to be modular and flexible. It also supports partial metadata transfer as needed by the signing party's extrinsic decoding mechanism. This accounts for signing devices' potentially limited communication bandwidth and/or memory capacity.
While all blockchain systems support (at least in some sense) offline signing used in air-gapped wallets and lightweight embedded devices, only a few simultaneously allow complex upgradeable logic and full message decoding on the cold offline signer side; Substrate is one of these heartening few, and therefore we should build on this feature to greatly improve transaction security, and thus, in general, network resilience.
As a starting point, it is important to recognise that prudence and due care are naturally required. As we build further reliance on this feature we should be very careful to make sure it works correctly every time so as not to create false sense of security.
@@ -3779,11 +3576,11 @@All chain teams are stakeholders, as implementing this feature would require timely effort on their side and would impact compatibility with older tools.
This feature is essential for all offline signer tools; many regular signing tools might make use of it. In general, this RFC greatly improves security of any network implementing it, as many governing keys are used with offline signers.
Implementing this RFC would remove the requirement to maintain metadata portals manually, as the task of metadata verification would effectively be moved to the consensus mechanism of the chain.
-Detailed description of metadata shortening and digest process is provided in metadata-shortener crate (see cargo doc --open
and examples). Below are presented algorithms of the process.
0x02
- 0xFF
A 1-byte increase in transaction size due to signed extension value. Digest is not included in transferred transaction, only in signing process.
Proposal in this form is not compatible with older tools that do not implement proper MetadataV14 self-descriptive features; those would have to be upgraded to include a new signed extensions field.
This project was developed upon a Polkadot Treasury grant; relevant development links are located in metadata-offline-project repository.
-Propose a way of permuting the availability chunk indices assigned to validators, in the context of recovering available data from systematic chunks, with the purpose of fairly distributing network bandwidth usage.
-Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once per session, naively using the ValidatorIndex as the ChunkIndex would put unreasonable stress on the first N/3 validators during an entire session, when favouring availability recovery from systematic chunks.
@@ -3997,9 +3794,9 @@Relay chain node core developers.
-An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the resulting code. @@ -4062,7 +3859,6 @@
core_index
that used to be occupied by a candidate in some parts of the dispute protocol is
very complicated (See appendix A). This RFC assumes that availability-recovery processes initiated during
@@ -4183,7 +3979,7 @@ See comments on the tracking issue and the in-progress PR
-Not applicable.
This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic @@ -4267,20 +4063,20 @@
This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".
Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.
The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.
-The Polkadot peer-to-peer network is made of nodes. Not all of these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.
It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.
If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.
This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.
-Low-level client developers. People interested in accessing the archive of the chain.
-Reading RFC #8 first might help with comprehension, as this RFC is very similar.
Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise.
None that I can see.
The content of this section is basically the same as the one in RFC 8.
@@ -4336,7 +4132,7 @@Irrelevant.
Unknown.
-While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch
and BabeApi_nextEpoch
might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?
This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related request to using the native peer-to-peer protocol rather than JSON-RPC.
Currently, Substrate runtimes use a simple allocator defined by the host side. Every runtime MUST import these allocator functions for normal execution. This situation makes runtime code not versatile enough.
This RFC therefore proposes a new spec for the allocator part, to make the Substrate runtime more generic.
-Since this RFC defines a new way of handling the allocator, we now regard the old one as the legacy
allocator.
As the allocator implementation details are defined by the Substrate client, a parachain/parathread cannot customize the memory allocation algorithm. The new specification therefore allows the runtime to customize memory allocation and then export the allocator functions, according to the specification, for the client side to use.
Another benefit is that some new host functions can be designed without allocating memory on the client side, which may bring potential performance improvements. It will also help provide a unified and clean specification if the Substrate runtime supports multiple targets (e.g. RISC-V).
There is also a potential benefit for language support: many programming languages that can compile to Wasm may not easily support an external allocator. Removing this requirement makes it easier for other programming languages to enter the Substrate runtime ecosystem.
The last and most important benefit is that, for offchain context execution, the runtime can fully support pure Wasm. This means that all imported host functions could be left uncalled (as stub functions), so the various verification logic of the runtime can be turned into pure Wasm, which makes it possible for the Substrate runtime to run block verification in other environments (such as browsers and other non-Substrate environments).
No attempt was made at convincing stakeholders.
-This section contains a list of functions that should be exported by the Substrate runtime.
We define the spec as version 1, so the following dummy
function v1
MUST be exported to hint
@@ -4429,7 +4225,7 @@
Detail-heavy explanation of the RFC, suitable for explanation to an implementer of the changeset. This should address corner cases in detail and provide justification behind decisions, and provide rationale for how the design meets the solution requirements.
-The allocator inside the runtime will make the code size bigger, though by how much is not obvious. The allocator inside the runtime may slow down (or speed up) the runtime; this too is not obvious.
We can ignore these drawbacks since they are not prominent. Execution efficiency is largely decided by the runtime developer; we cannot prevent poor efficiency if a developer chooses it.
@@ -4452,7 +4248,7 @@None at this time.
The content discussed in RFC-0004 is basically orthogonal, but it could still be considered together with this one, and it is preferred that this RFC be implemented first.
@@ -4487,17 +4283,17 @@This RFC proposes lowering the existential deposit requirements on Asset Hub for Polkadot by a factor of 25, from 0.1 DOT to .004 DOT. The objective is to lower the barrier to entry for asset minters to mint a new asset to the entire DOT token holder base, and make Asset Hub on Polkadot a place where everyone can do small asset conversions.
-The current existential deposit is 0.1 DOT on Asset Hub for Polkadot. While this does not appear to be a significant financial barrier for most people (only $0.80), this value makes Asset Hub impractical for asset minters, specifically for the case where a minter wishes to mint a new asset for the entire community of DOT holders (e.g. 1.25MM DOT holders would cost 125K DOT @ $8 = $1MM).
By lowering the existential deposit requirements from 0.1 DOT to 0.004 DOT, the cost of minting to the entire community of DOT holders goes from an unmanageable number [125K DOT, the value of several houses circa December 2023] down to a manageable number [5K DOT, the value of a car circa December 2023].
-asset.mint
.The exact amount of the existential deposit (ED) is proposed to be 0.004 DOT based on
It is assumed that the estimated cost to store a single account is less than 0.004 DOT. If this assumption is challenged by Polkadot Fellows, we request that the Fellows provide an empirical determination of what the actual cost of storing a single account is, at present-day numbers of DOT token holders (approximately 1-2MM), and then to support a factor of 10-1000x growth over the next 5 years. This assumption has been discussed on the forum: Polkadot AssetHub - high NFT collection deposit
First, the cost has to be mapped from DOT into real world USD storage costs of running an Asset Hub on Polkadot node, and the DOT / USD ratio itself has varied widely in the past and will continue to do so in the future. Second, according to this analysis, at present the pragmatic cost of estimating storage is approximated by what it costs to store accounts for 1 or 2 years at most. Underestimates on this cost is believed to be an economic subsidy while overestimates on this cost is believe to be an economic depressant on activity.
Given that Asset Hub for Polkadot is relatively underused, we believe the correct thing to do is to subsidize Asset Hub activity with a lower ED.
-The primary drawback of subsidizing Asset Hub activity with a 25x lower ED is borne by Asset Hub users in the distant future, who will pay for the activity subsidized by the lower ED.
Lowering the ED from 0.004 DOT to 0 DOT would clearly unnecessarily invite account spam attacks common to EVM chains, which have no ED.
@@ -4543,7 +4339,7 @@It is believed that Asset Hub for Kusama can undergo the same logic change without issue.
For Asset Hub for Polkadot, it is extremely desirable that this change be approved in early 2024 with some urgency.
-It is desirable to know the cost to store an account on Asset Hub for Polkadot when the number of accounts is 10MM, 100MM, or 1B, in order to better estimate the cost of the subsidy. We do not believe a precise answer to this merits delaying the subsidy at present. However, if approved, we believe that once the number of accounts reaches 10MM-25MM or exponential growth is observed, this ED should be reevaluated.
If accepted, this RFC could pave the way for other accessibility improvements:
@@ -4552,6 +4348,270 @@Table of Contents
+ +Start Date | 2023-07-04 |
Description | Update the runtime-host interface to no longer make use of a host-side allocator |
Authors | Pierre Krieger |
Update the runtime-host interface to no longer make use of a host-side allocator.
+The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.
+The API of many host functions consists in allocating a buffer. For example, when calling ext_hashing_twox_256_version_1
, the host allocates a 32 bytes buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1
on this pointer in order to free the buffer.
Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1
, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst case scenario simply consists in decreasing a number, and in the best case scenario is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1
.
Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.
+No attempt was made at convincing stakeholders.
+This section contains a list of new host functions to introduce.
+(func $ext_storage_read_version_2
+ (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
+(func $ext_default_child_storage_read_version_2
+ (param $child_storage_key i64) (param $key i64) (param $value_out i64)
+ (param $offset i32) (result i64))
+
+The signature and behaviour of ext_storage_read_version_2
and ext_default_child_storage_read_version_2
are identical to their version 1 counterparts, but the return value has a different meaning.
+The new functions directly return the number of bytes that were written in the value_out
buffer. If the entry doesn't exist, a value of -1
is returned. Given that the host must never write more bytes than the size of the buffer in value_out
, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1
is not ambiguous.
The runtime execution stops with an error if value_out
is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.
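To illustrate how a runtime could consume this return value, here is a hypothetical wasm32 runtime-side sketch (not the official Substrate bindings; the wrapper and pack helper are illustrative only), assuming the usual convention of packing a pointer and a length into a single i64 pointer-size value:
// Hypothetical runtime-side binding; the real calling convention is defined by the host.
extern "C" {
    fn ext_storage_read_version_2(key: i64, value_out: i64, offset: i32) -> i64;
}

// Packs a wasm32 pointer and a length into one i64 "pointer-size" value
// (pointer in the low 32 bits, length in the high 32 bits).
fn pack(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Reads a storage value into `out`, starting at `offset` within the value.
/// Returns `None` if the entry doesn't exist, otherwise the number of bytes written.
fn storage_read(key: &[u8], out: &mut [u8], offset: u32) -> Option<usize> {
    let ret = unsafe {
        ext_storage_read_version_2(
            pack(key.as_ptr() as u32, key.len() as u32),
            pack(out.as_mut_ptr() as u32, out.len() as u32),
            offset as i32,
        )
    };
    if ret == -1 {
        None
    } else {
        // The host never writes more than `out.len()` bytes, so `ret` fits the buffer.
        Some(ret as usize)
    }
}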
(func $ext_storage_next_key_version_2
+ (param $key i64) (param $out i64) (return i32))
+(func $ext_default_child_storage_next_key_version_2
+ (param $child_storage_key i64) (param $key i64) (param $out i64) (return i32))
+
+The behaviour of these functions is identical to their version 1 counterparts.
+Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an out
parameter containing a pointer-size to the memory location where the host writes the output. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out
.
+These functions return the size, in bytes, of the next key, or 0
if there is no next key. If the size of the next key is larger than the buffer in out
, the bytes of the key that fit the buffer are written to out
and any extra byte that doesn't fit is discarded.
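As an illustration of these semantics, the hypothetical wasm32 sketch below (not the official Substrate bindings) calls the function with a fixed-size buffer and retries with a larger one only when the returned size indicates that the key was truncated:
extern "C" {
    fn ext_storage_next_key_version_2(key: i64, out: i64) -> i32;
}

fn pack(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Returns the key following `key` in storage, or `None` if there is none.
fn next_key(key: &[u8]) -> Option<Vec<u8>> {
    // Storage keys are usually concatenations of hashes, so a fixed-size guess fits most of the time.
    let mut buf = vec![0u8; 96];
    loop {
        let size = unsafe {
            ext_storage_next_key_version_2(
                pack(key.as_ptr() as u32, key.len() as u32),
                pack(buf.as_mut_ptr() as u32, buf.len() as u32),
            )
        } as usize;
        if size == 0 {
            return None; // no next key
        }
        if size <= buf.len() {
            buf.truncate(size);
            return Some(buf);
        }
        // The key was truncated: grow the buffer to the reported size and call again.
        buf = vec![0u8; size];
    }
}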
Some notes:
+0
can unambiguously be used to indicate the lack of next key.ext_storage_next_key_version_2
and ext_default_child_storage_next_key_version_2
are typically used in order to enumerate keys that start with a certain prefix. Given that storage keys are constructed by concatenating hashes, the runtime is expected to know the size of the next key and can allocate a buffer that can fit said key. When the next key doesn't belong to the desired prefix, it might not fit the buffer, but given that the start of the key is written to the buffer anyway this can be detected in order to avoid calling the function a second time with a larger buffer.(func $ext_hashing_keccak_256_version_2
+ (param $data i64) (param $out i32))
+(func $ext_hashing_keccak_512_version_2
+ (param $data i64) (param $out i32))
+(func $ext_hashing_sha2_256_version_2
+ (param $data i64) (param $out i32))
+(func $ext_hashing_blake2_128_version_2
+ (param $data i64) (param $out i32))
+(func $ext_hashing_blake2_256_version_2
+ (param $data i64) (param $out i32))
+(func $ext_hashing_twox_64_version_2
+ (param $data i64) (param $out i32))
+(func $ext_hashing_twox_128_version_2
+ (param $data i64) (param $out i32))
+(func $ext_hashing_twox_256_version_2
+ (param $data i64) (param $out i32))
+(func $ext_trie_blake2_256_root_version_3
+ (param $data i64) (param $version i32) (param $out i32))
+(func $ext_trie_blake2_256_ordered_root_version_3
+ (param $data i64) (param $version i32) (param $out i32))
+(func $ext_trie_keccak_256_root_version_3
+ (param $data i64) (param $version i32) (param $out i32))
+(func $ext_trie_keccak_256_ordered_root_version_3
+ (param $data i64) (param $version i32) (param $out i32))
+(func $ext_default_child_storage_root_version_3
+ (param $child_storage_key i64) (param $out i32))
+(func $ext_crypto_ed25519_generate_version_2
+ (param $key_type_id i32) (param $seed i64) (param $out i32))
+(func $ext_crypto_sr25519_generate_version_2
+ (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
+(func $ext_crypto_ecdsa_generate_version_2
+ (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
+
+The behaviour of these functions is identical to their version 1 or version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of these functions accepts an out
parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine.
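This is where the main efficiency gain materializes: the output can be written directly into a buffer on the runtime's stack, with no host-side allocation. A hypothetical wasm32 sketch (not the official Substrate bindings), using ext_hashing_twox_256_version_2 as an example:
extern "C" {
    fn ext_hashing_twox_256_version_2(data: i64, out: i32);
}

fn pack(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Hashes `data`; the host writes the 32-byte digest directly into a stack buffer.
fn twox_256(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    unsafe {
        ext_hashing_twox_256_version_2(
            pack(data.as_ptr() as u32, data.len() as u32),
            out.as_mut_ptr() as u32 as i32,
        );
    }
    out
}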
(func $ext_default_child_storage_root_version_3
+ (param $child_storage_key i64) (param $out i32))
+(func $ext_storage_root_version_3
+ (param $out i32))
+
+The behaviour of these functions is identical to their version 1 and version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new versions of these functions accept an out
parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine.
I have taken the liberty of using the version 1 of these functions as a base rather than the version 2, as a PPP deprecating the version 2 of these functions has previously been accepted: https://github.com/w3f/PPPs/pull/6.
+(func $ext_storage_clear_prefix_version_3
+ (param $prefix i64) (param $limit i64) (param $removed_count_out i32)
+ (return i32))
+(func $ext_default_child_storage_clear_prefix_version_3
+ (param $child_storage_key i64) (param $prefix i64)
+ (param $limit i64) (param $removed_count_out i32) (return i32))
+(func $ext_default_child_storage_kill_version_4
+ (param $child_storage_key i64) (param $limit i64)
+ (param $removed_count_out i32) (return i32))
+
+The behaviour of these functions is identical to their version 2 and 3 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the versions 3 and 4 of these functions accept a removed_count_out
parameter containing the memory location of an 8-byte buffer where the host writes, in little endian, the number of keys that were removed. The runtime execution stops with an error if removed_count_out
is outside of the range of the memory of the virtual machine. The functions return 1 to indicate that there are keys remaining, and 0 to indicate that all keys have been removed.
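A hypothetical wasm32 sketch (not the official Substrate bindings) of how the two outputs could be interpreted; the encoding of the limit parameter is kept as in the earlier versions and passed through opaquely:
extern "C" {
    fn ext_storage_clear_prefix_version_3(prefix: i64, limit: i64, removed_count_out: i32) -> i32;
}

fn pack(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Clears keys under `prefix`. Returns `(removed_keys, keys_remaining)`.
fn clear_prefix(prefix: &[u8], limit: i64) -> (u64, bool) {
    let mut removed = [0u8; 8]; // the host writes the removed-key count here, little endian
    let keys_remaining = unsafe {
        ext_storage_clear_prefix_version_3(
            pack(prefix.as_ptr() as u32, prefix.len() as u32),
            limit,
            removed.as_mut_ptr() as u32 as i32,
        )
    };
    (u64::from_le_bytes(removed), keys_remaining == 1)
}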
Note that there is an alternative proposal to add new host functions with the same names: https://github.com/w3f/PPPs/pull/7. This alternative doesn't conflict with this one except for the version number. One proposal or the other will have to use versions 4 and 5 rather than 3 and 4.
+(func $ext_crypto_ed25519_sign_version_2
+ (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
+(func $ext_crypto_sr25519_sign_version_2
+ (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
+(func $ext_crypto_ecdsa_sign_version_2
+ (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
+(func $ext_crypto_ecdsa_sign_prehashed_version_2
+ (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i64))
+
+The behaviour of these functions is identical to their version 1 counterparts. The new versions of these functions accept an out
parameter containing the memory location where the host writes the signature. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out
. The signatures are always of a size known at compilation time. On success, these functions return 0
. If the public key can't be found in the keystore, these functions return 1
and do not write anything to out
.
Note that the return value is 0 on success and 1 on failure, while the previous versions of these functions write 1 on success (as it represents a SCALE-encoded Some
) and 0 on failure (as it represents a SCALE-encoded None
). Returning 0 on success and non-zero on failure is consistent with common practices in the C programming language and is less surprising than the opposite.
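For example, a hypothetical wasm32 wrapper (not the official Substrate bindings) around ext_crypto_ed25519_sign_version_2 maps this convention onto an Option:
extern "C" {
    fn ext_crypto_ed25519_sign_version_2(key_type_id: i32, key: i32, msg: i64, out: i32) -> i32;
}

fn pack(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Signs `msg` with the keystore entry identified by `key_type_id` and `public`.
/// Returns `None` when the public key is not present in the keystore.
fn ed25519_sign(key_type_id: &[u8; 4], public: &[u8; 32], msg: &[u8]) -> Option<[u8; 64]> {
    let mut sig = [0u8; 64];
    let ret = unsafe {
        ext_crypto_ed25519_sign_version_2(
            key_type_id.as_ptr() as u32 as i32,
            public.as_ptr() as u32 as i32,
            pack(msg.as_ptr() as u32, msg.len() as u32),
            sig.as_mut_ptr() as u32 as i32,
        )
    };
    if ret == 0 { Some(sig) } else { None } // 0 = success, non-zero = key not found
}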
(func $ext_crypto_secp256k1_ecdsa_recover_version_3
+ (param $sig i32) (param $msg i32) (param $out i32) (return i64))
+(func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3
+ (param $sig i32) (param $msg i32) (param $out i32) (return i64))
+
+The behaviour of these functions is identical to their version 2 counterparts. The new versions of these functions accept an out
parameter containing the memory location where the host writes the recovered public key. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out
. The recovered public keys are always of a size known at compilation time. On success, these functions return 0
. On failure, these functions return a non-zero value and do not write anything to out
.
The non-zero value written on failure is:
+These values are equal to the values returned on error by the version 2 (see https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), but incremented by 1 in order to reserve 0 for success.
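A hypothetical wasm32 sketch (not the official Substrate bindings); the error codes are passed through opaquely rather than decoded here:
extern "C" {
    fn ext_crypto_secp256k1_ecdsa_recover_version_3(sig: i32, msg: i32, out: i32) -> i64;
}

/// Recovers the 64-byte public key from a 65-byte signature and a 32-byte message hash.
/// On failure, the host's non-zero error code is returned in `Err`.
fn secp256k1_recover(sig: &[u8; 65], msg: &[u8; 32]) -> Result<[u8; 64], i64> {
    let mut out = [0u8; 64];
    let ret = unsafe {
        ext_crypto_secp256k1_ecdsa_recover_version_3(
            sig.as_ptr() as u32 as i32,
            msg.as_ptr() as u32 as i32,
            out.as_mut_ptr() as u32 as i32,
        )
    };
    if ret == 0 { Ok(out) } else { Err(ret) }
}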
+(func $ext_crypto_ed25519_num_public_keys_version_1
+ (param $key_type_id i32) (return i32))
+(func $ext_crypto_ed25519_public_key_version_2
+ (param $key_type_id i32) (param $key_index i32) (param $out i32))
+(func $ext_crypto_sr25519_num_public_keys_version_1
+ (param $key_type_id i32) (return i32))
+(func $ext_crypto_sr25519_public_key_version_2
+ (param $key_type_id i32) (param $key_index i32) (param $out i32))
+(func $ext_crypto_ecdsa_num_public_keys_version_1
+ (param $key_type_id i32) (return i32))
+(func $ext_crypto_ecdsa_public_key_version_2
+ (param $key_type_id i32) (param $key_index i32) (param $out i32))
+
+These functions supersede the ext_crypto_ed25519_public_key_version_1
, ext_crypto_sr25519_public_key_version_1
, and ext_crypto_ecdsa_public_key_version_1
host functions.
Instead of calling ext_crypto_ed25519_public_key_version_1
in order to obtain the list of all keys at once, the runtime should instead call ext_crypto_ed25519_num_public_keys_version_1
in order to obtain the number of public keys available, then ext_crypto_ed25519_public_key_version_2
repeatedly.
+The ext_crypto_ed25519_public_key_version_2
function writes the public key of the given key_index
to the memory location designated by out
. The key_index
must be between 0 (included) and n
(excluded), where n
is the value returned by ext_crypto_ed25519_num_public_keys_version_1
. Execution must trap if key_index
is out of range.
The same explanations apply for ext_crypto_sr25519_public_key_version_1
and ext_crypto_ecdsa_public_key_version_1
.
Host implementers should be aware that the list of public keys (including their ordering) must not change while the runtime is running. This is most likely done by copying the list of all available keys either at the start of the execution or the first time the list is accessed.
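Put together, the enumeration pattern looks roughly like the following hypothetical wasm32 sketch (not the official Substrate bindings):
extern "C" {
    fn ext_crypto_ed25519_num_public_keys_version_1(key_type_id: i32) -> i32;
    fn ext_crypto_ed25519_public_key_version_2(key_type_id: i32, key_index: i32, out: i32);
}

/// Collects all ed25519 public keys of a key type, one host call per key.
fn ed25519_public_keys(key_type_id: &[u8; 4]) -> Vec<[u8; 32]> {
    let key_type_ptr = key_type_id.as_ptr() as u32 as i32;
    let n = unsafe { ext_crypto_ed25519_num_public_keys_version_1(key_type_ptr) };
    let mut keys = Vec::with_capacity(n as usize);
    for key_index in 0..n {
        let mut key = [0u8; 32];
        unsafe {
            ext_crypto_ed25519_public_key_version_2(
                key_type_ptr,
                key_index,
                key.as_mut_ptr() as u32 as i32,
            );
        }
        keys.push(key);
    }
    keys
}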
+(func $ext_offchain_http_request_start_version_2
+ (param $method i64) (param $uri i64) (param $meta i64) (result i32))
+
+The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the request identifier in it, and returning a pointer to it, the version 2 of this function simply returns the newly-assigned identifier to the HTTP request. On failure, this function returns -1
. An identifier of -1
is invalid and is reserved to indicate failure.
(func $ext_offchain_http_request_write_body_version_2
+ (param $request_id i32) (param $chunk i64) (param $deadline i64) (result i32))
+(func $ext_offchain_http_response_read_body_version_2
+ (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64))
+
+The behaviour of these functions is identical to their version 1 counterparts. Instead of allocating a buffer, writing two bytes in it, and returning a pointer to it, the new version of these functions simply indicates what happened:
+ext_offchain_http_request_write_body_version_2
, 0 on success.ext_offchain_http_response_read_body_version_2
, 0 or a non-zero number of bytes on success.These values are equal to the values returned on error by the version 1 (see https://spec.polkadot.network/chap-host-api#defn-http-error), but tweaked in order to reserve positive numbers for success.
+When it comes to ext_offchain_http_response_read_body_version_2
, the host implementers must not read too much data at once in order not to create ambiguity in the returned value. Given that the size of the buffer
is always less than or equal to 4 GiB, this is not a problem.
(func $ext_offchain_http_response_wait_version_2
+ (param $ids i64) (param $deadline i64) (param $out i32))
+
+The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an out
parameter containing the memory location where the host writes the output. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine.
The encoding of the response code is also modified compared to its version 1 counterpart and each response code now encodes to 4 little endian bytes as described below:
+The buffer passed to out
must always have a size of 4 * n
where n
is the number of elements in the ids
.
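A hypothetical wasm32 sketch (not the official Substrate bindings) of sizing the buffer and decoding the per-request codes; the encoding of ids and deadline is kept as in the version 1 function and not shown here:
extern "C" {
    fn ext_offchain_http_response_wait_version_2(ids: i64, deadline: i64, out: i32);
}

fn pack(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Waits on `num_ids` requests and returns one raw little-endian u32 code per request.
fn http_response_wait(ids_encoded: &[u8], num_ids: usize, deadline_encoded: &[u8]) -> Vec<u32> {
    let mut out = vec![0u8; 4 * num_ids]; // exactly 4 bytes per requested id
    unsafe {
        ext_offchain_http_response_wait_version_2(
            pack(ids_encoded.as_ptr() as u32, ids_encoded.len() as u32),
            pack(deadline_encoded.as_ptr() as u32, deadline_encoded.len() as u32),
            out.as_mut_ptr() as u32 as i32,
        );
    }
    out.chunks_exact(4)
        .map(|c| u32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}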
(func $ext_offchain_http_response_header_name_version_1
+ (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
+(func $ext_offchain_http_response_header_value_version_1
+ (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
+
+These functions supersede the ext_offchain_http_response_headers_version_1
host function.
Contrary to ext_offchain_http_response_headers_version_1
, only one header indicated by header_index
can be read at a time. Instead of calling ext_offchain_http_response_headers_version_1
once, the runtime should call ext_offchain_http_response_header_name_version_1
and ext_offchain_http_response_header_value_version_1
multiple times with an increasing header_index
, until a value of -1
is returned.
These functions accept an out
parameter containing a pointer-size to the memory location where the header name or value should be written. The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out
.
These functions return the size, in bytes, of the header name or header value. If the request doesn't exist or is in an invalid state (as documented for ext_offchain_http_response_headers_version_1
) or the header_index
is out of range, a value of -1
is returned. Given that the host must never write more bytes than the size of the buffer in out
, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1
is not ambiguous.
If the buffer in out
is too small to fit the entire header name or value, only the bytes that fit are written and the rest are discarded.
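A hypothetical wasm32 sketch (not the official Substrate bindings) of the iteration described above, assuming 256-byte buffers are large enough for the headers of interest:
extern "C" {
    fn ext_offchain_http_response_header_name_version_1(request_id: i32, header_index: i32, out: i64) -> i64;
    fn ext_offchain_http_response_header_value_version_1(request_id: i32, header_index: i32, out: i64) -> i64;
}

fn pack(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Reads all (name, value) header pairs of a response.
fn response_headers(request_id: i32) -> Vec<(Vec<u8>, Vec<u8>)> {
    let mut headers = Vec::new();
    let mut header_index = 0;
    loop {
        let mut name = vec![0u8; 256];
        let mut value = vec![0u8; 256];
        let name_len = unsafe {
            ext_offchain_http_response_header_name_version_1(
                request_id,
                header_index,
                pack(name.as_mut_ptr() as u32, name.len() as u32),
            )
        };
        if name_len < 0 {
            break; // -1: no more headers, or the request is in an invalid state
        }
        let value_len = unsafe {
            ext_offchain_http_response_header_value_version_1(
                request_id,
                header_index,
                pack(value.as_mut_ptr() as u32, value.len() as u32),
            )
        };
        // Values larger than the buffer are truncated by the host; keep only what fits.
        let keep_name = (name_len as usize).min(name.len());
        let keep_value = (value_len.max(0) as usize).min(value.len());
        name.truncate(keep_name);
        value.truncate(keep_value);
        headers.push((name, value));
        header_index += 1;
    }
    headers
}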
(func $ext_offchain_submit_transaction_version_2
+ (param $data i64) (return i32))
+(func $ext_offchain_http_request_add_header_version_2
+ (param $request_id i32) (param $name i64) (param $value i64) (result i32))
+
+Instead of allocating a buffer, writing 1
or 0
in it, and returning a pointer to it, the version 2 of these functions return 0
or 1
, where 0
indicates success and 1
indicates failure. The runtime must interpret any non-0
value as failure, but the client must always return 1
in case of failure.
(func $ext_offchain_local_storage_read_version_1
+ (param $kind i32) (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
+
+This function supersedes the ext_offchain_local_storage_get_version_1
host function, and uses an API and logic similar to ext_storage_read_version_2
.
It reads the offchain local storage key indicated by kind
and key
starting at the byte indicated by offset
, and writes the value to the pointer-size indicated by value_out
.
The function returns the number of bytes that were written in the value_out
buffer. If the entry doesn't exist, a value of -1
is returned. Given that the host must never write more bytes than the size of the buffer in value_out
, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1
is not ambiguous.
The runtime execution stops with an error if value_out
is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.
(func $ext_offchain_network_peer_id_version_1
+ (param $out i64))
+
+This function writes the PeerId
of the local node to the memory location indicated by out
. A PeerId
is always 38 bytes long.
+The runtime execution stops with an error if out
is outside of the range of the memory of the virtual machine.
(func $ext_input_size_version_1
+ (return i64))
+(func $ext_input_read_version_1
+ (param $offset i64) (param $out i64))
+
+When a runtime function is called, the host uses the allocator to allocate memory within the runtime in which to write the input data. These two new host functions provide an alternative way to access the input that doesn't make use of the allocator.
+The ext_input_size_version_1
host function returns the size in bytes of the input data.
The ext_input_read_version_1
host function copies some data from the input data to the memory of the runtime. The offset
parameter indicates the offset within the input data where to start copying, and must be less than or equal to the value returned by ext_input_size_version_1
. The out
parameter is a pointer-size to the buffer where the data should be written.
+The runtime execution stops with an error if offset
is strictly greater than the size of the input data, or if out
is outside of the range of the memory of the virtual machine, even if the amount of data to copy would be 0 bytes.
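A hypothetical wasm32 sketch (not the official Substrate bindings) of how a runtime could copy its entire input into a runtime-allocated buffer using these two functions:
extern "C" {
    fn ext_input_size_version_1() -> i64;
    fn ext_input_read_version_1(offset: i64, out: i64);
}

fn pack(ptr: u32, len: u32) -> i64 {
    (((len as u64) << 32) | ptr as u64) as i64
}

/// Copies the whole input of the current runtime call into a buffer allocated by the runtime.
fn read_input() -> Vec<u8> {
    let size = unsafe { ext_input_size_version_1() } as usize;
    let mut input = vec![0u8; size];
    if size > 0 {
        unsafe {
            ext_input_read_version_1(0, pack(input.as_mut_ptr() as u32, size as u32));
        }
    }
    input
}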
In addition to the new host functions, this RFC proposes two changes to the runtime-host interface:
+(func (result i64))
.__heap_base
All the host functions that are being superseded by new host functions are now considered deprecated and should no longer be used. +The following other host functions are also considered deprecated:
+ext_storage_get_version_1
ext_default_child_storage_get_version_1
ext_allocator_malloc_version_1
ext_allocator_free_version_1
ext_offchain_network_state_version_1
This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.
+The API of these new functions was heavily inspired by the APIs used in the C programming language.
+The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.
+It is expected that most host functions are faster than, or equal in speed to, their deprecated counterparts, with the following exceptions:
+ext_input_size_version_1
/ext_input_read_version_1
is inherently slower than obtaining a buffer with the entire data due to the two extra function calls and the extra copying. However, given that this only happens once per runtime call, the cost is expected to be negligible.
The ext_crypto_*_public_keys
, ext_offchain_network_state
, and ext_offchain_http_*
host functions are likely slightly slower than their deprecated counterparts, but given that they are used only in offchain workers this is acceptable.
It is unclear how replacing ext_storage_get
with ext_storage_read
and ext_default_child_storage_get
with ext_default_child_storage_read
will impact performances.
It is unclear how the changes to ext_storage_next_key
and ext_default_child_storage_next_key
will impact performances.
After this RFC, we can remove from the source code of the host the allocator altogether in a future version, by removing support for all the deprecated host functions. +This would remove the possibility to synchronize older blocks, which is probably controversial and requires a some preparations that are out of scope of this RFC.
Table of Contents
This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for creating NFT collections. The objective is to lower the barrier to entry for artists, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.
+This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hubs for creating an NFT collection and minting an individual NFT, and lowering the corresponding metadata and attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.
The current deposit of 10 DOT for collection creation on the Polkadot Asset Hub presents a significant financial barrier for many artists. By lowering the deposit requirements, we aim to encourage more artists to participate in the Polkadot NFT ecosystem, thereby enriching the diversity and vibrancy of the community and its offerings.
+The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2 DOT for metadata and attribute deposit) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub presents a significant financial barrier for many NFT creators. By lowering the deposit requirements, we aim to encourage more NFT creators to participate in the Polkadot NFT ecosystem, thereby enriching the diversity and vibrancy of the community and its offerings.
The actual implementation of the deposit is an arbitrary number inherited from the Uniques pallet; it is not the result of any economic analysis. This proposal aims to adjust the deposit from a constant to dynamic pricing based on the deposit
function with respect to stakeholders.
Previous discussions have been held within the Polkadot Forum community and with artists expressing their concerns about the deposit amounts. Link.
This RFC proposes a revision of the deposit constants in the nfts pallet on the Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.
This RFC suggests modifying deposit constants defined in the nfts
pallet on the Polkadot Asset Hub to require a lower deposit. The reduced deposit amount should be determined by the deposit
adjusted by the pricing mechanism (arbitrary number/another pricing function).
Current deposit requirements are as follows:
+Current deposit requirements are as follows
+Looking at the current code structure the currently implemented we can find that the pricing re-uses the logic of how Uniques are defined:
+#![allow(unused)] fn main() { parameter_types! { @@ -238,9 +258,11 @@
Explanation
pub const NftsAttributeDepositBase: Balance = UniquesAttributeDepositBase::get(); pub const NftsDepositPerByte: Balance = UniquesDepositPerByte::get(); } - -// -parameter_types! { +}
In the existing setup, the Uniques are defined with specific deposit values for different elements:
+-#![allow(unused)] +fn main() { +parameter_types! { pub const UniquesCollectionDeposit: Balance = UNITS / 10; // 1 / 10 UNIT deposit to create a collection pub const UniquesItemDeposit: Balance = UNITS / 1_000; // 1 / 1000 UNIT deposit to mint an item pub const UniquesMetadataDepositBase: Balance = deposit(1, 129); @@ -248,7 +270,38 @@
Explanation
pub const UniquesDepositPerByte: Balance = deposit(0, 1); } }
The proposed change would modify the deposit constants to require a lower deposit. The reduced deposit amount should be determined by deposit
adjusted by an arbitrary number.
As we can see in the code definition above, the current code does not use the deposit
function in the following instances: UniquesCollectionDeposit
and UniquesItemDeposit
.
This proposed modification adjusts the deposits to use the deposit
function instead of using an arbitrary number.
+#![allow(unused)] +fn main() { +parameter_types! { + pub const NftsCollectionDeposit: Balance = deposit(1, 130); + pub const NftsItemDeposit: Balance = deposit(1, 164); + pub const NftsMetadataDepositBase: Balance = deposit(1, 129); + pub const NftsAttributeDepositBase: Balance = deposit(1, 0); + pub const NftsDepositPerByte: Balance = deposit(0, 1); +} +}
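For orientation, the deposit helper referenced above generally combines a per-item base with a per-byte rate. The sketch below is illustrative only; the rates are placeholder values chosen to land in the same order of magnitude as the table that follows, not the actual Asset Hub constants.
type Balance = u128;

// Placeholder values for illustration only -- NOT the real chain constants.
const UNITS: Balance = 10_000_000_000; // 1 DOT expressed in Planck
const ITEM_RATE: Balance = UNITS / 5; // hypothetical: 0.2 DOT per item
const BYTE_RATE: Balance = UNITS / 200_000; // hypothetical: 0.000005 DOT per byte

const fn deposit(items: u32, bytes: u32) -> Balance {
    items as Balance * ITEM_RATE + bytes as Balance * BYTE_RATE
}

// With these placeholder rates, deposit(1, 130) is roughly 0.2006 DOT -- the same order of
// magnitude as the proposed NftsCollectionDeposit shown in the table below.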
The calculations below were performed using the following repository: rfc-pricing.
+Polkadot
+| Name | Current price implementation | Proposed price (using the new deposit function) |
+|---------------------------|----------------------------------|-------------------------|
+| collectionDeposit | 10 DOT | 0.20064 DOT |
+| itemDeposit | 0.01 DOT | 0.20081 DOT |
+| metadataDepositBase | 0.20129 DOT | 0.20076 DOT |
+| attributeDepositBase | 0.2 DOT | 0.2 DOT |
+Similarly, the prices for the Kusama ecosystem were calculated as:
+Kusama:
+| Name | Current price implementation | Proposed Price in KSM |
+|---------------------------|----------------------------------|---------------------------|
+| collectionDeposit | 0.1 KSM | 0.006688 KSM |
+| itemDeposit | 0.001 KSM | 0.000167 KSM |
+| metadataDepositBase | 0.006709666617 KSM | 0.0006709666617 KSM |
+| attributeDepositBase | 0.00666666666 KSM | 0.000666666666 KSM |
+In an effort to further lower barriers to entry and foster greater inclusivity, we propose additional modifications to the pricing structure. These proposed reductions are based on a collaborative and calculated approach, involving the consensus of leading NFT creators within the Polkadot and Kusama Asset Hub communities. The adjustments to deposit amounts are not made arbitrarily. Instead, they are the result of detailed discussions and analyses conducted with prominent NFT creators.
+Proposed Code Adjustments
-#![allow(unused)] fn main() { parameter_types! { @@ -259,74 +312,82 @@
Explanation
pub const NftsDepositPerByte: Balance = deposit(0, 1); } }
Prices and Proposed Prices on Polkadot Asset Hub: -Scroll right
-| **Name** | **Current price implementation** | **Price if DOT = 5$** | **Price if DOT goes to 50$** | **Proposed Price in DOT** | **Proposed Price if DOT = 5$** | **Proposed Price if DOT goes to 50$**|
-|---------------------------|----------------------------------|------------------------|-------------------------------|---------------------------|----------------------------------|--------------------------------------|
-| collectionDeposit | 10 DOT | 50 $ | 500 $ | 0.20064 DOT | ~1 $ | 10.32$ |
-| itemDeposit | 0.01 DOT | 0.05 $ | 0.5 $ | 0.005 DOT | 0.025 $ | 0.251$ |
-| metadataDepositBase | 0.20129 DOT | 1.00645 $ | 10.0645 $ | 0.0020129 DOT | 0.0100645 $ | 0.100645$ |
-| attributeDepositBase | 0.2 DOT | 1 $ | 10 $ | 0.002 DOT | 0.01 $ | 0.1$ |
-
-Prices and Proposed Prices on Kusama Asset Hub: -Scroll right
-| **Name** | **Current price implementation** | **Price if KSM = 23$** | **Price if KSM goes to 500$** | **Proposed Price in KSM** | **Proposed Price if KSM = 23$** | **Proposed Price if KSM goes to 500$** |
-|---------------------------|----------------------------------|------------------------|-------------------------------|---------------------------|----------------------------------|----------------------------------------|
-| collectionDeposit | 0.1 KSM | 2.3 $ | 50 $ | 0.006688 KSM | 0.154 $ | 3.34 $ |
-| itemDeposit | 0.001 KSM | 0.023 $ | 0.5 $ | 0.000167 KSM | 0.00385 $ | 0.0835 $ |
-| metadataDepositBase | 0.006709666617 KSM | 0.15432183319 $ | 3.3548333085 $ | 0.0006709666617 KSM | 0.015432183319 $ | 0.33548333085 $ |
-| attributeDepositBase | 0.00666666666 KSM | 0.15333333318 $ | 3.333333333 $ | 0.000666666666 KSM | 0.015333333318 $ | 0.3333333333 $ |
-
-
--+Note: This is only a proposal for change and can be modified upon additional conversation.
-
Prices and Proposed Prices on Polkadot Asset Hub:
+Polkadot
+| Name | Current price implementation | Proposed Prices |
+|---------------------------|----------------------------------|---------------------|
+| collectionDeposit | 10 DOT | 0.20064 DOT |
+| itemDeposit | 0.01 DOT | 0.005 DOT |
+| metadataDepositBase | 0.20129 DOT | 0.002 DOT |
+| attributeDepositBase | 0.2 DOT | 0.002 DOT |
+Kusama
+| Name | Current price implementation | Proposed Price in KSM |
+|---------------------------|----------------------------------|---------------------------|
+| collectionDeposit | 0.1 KSM | 0.006688 KSM |
+| itemDeposit | 0.001 KSM | 0.000167 KSM |
+| metadataDepositBase | 0.006709666617 KSM | 0.0006709666617 KSM |
+| attributeDepositBase | 0.00666666666 KSM | 0.000666666666 KSM |
+Several innovative proposals have been considered to enhance the network's adaptability and manage deposit requirements more effectively:
+The concept of a weak governance origin, controlled by a consortium like the System Collective, has been proposed. This model would allow for dynamic adjustments of NFT deposit requirements in response to market conditions, adhering to storage deposit norms.
+Enhancements and Concerns:
+Another proposal is to use a mathematical function to regulate deposit prices, initially allowing low prices to encourage participation, followed by a gradual increase to prevent network bloat.
+Refinements:
+This approach suggests pegging the deposit value to a stable currency like the USD, introducing predictability and stability for network users.
+Considerations and Challenges:
+Each of these proposals offers unique advantages and challenges. The optimal approach may involve a combination of these ideas, carefully adjusted to address the specific needs and dynamics of the Polkadot and Kusama networks.
Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. Highlighted below are cogent points extracted from the discourse on the Polkadot Forum conversation, which provide critical perspectives on the implications of such changes:
---But NFT deposits were chosen somewhat arbitrarily at genesis and it’s a good exercise to re-evaluate them and adapt if they are causing pain and if lowering them has little or no negative side effect (or if the trade-off is worth it). --> joepetrowski
-
--Underestimates mean that state grows faster, although not unbounded - effectively an economic subsidy on activity. Overestimates mean that the state grows slower - effectively an economic depressant on activity. --> rphmeier
-
-+Technical: We want to prevent state bloat, therefore using state should have a cost associated with it. --> joepetrowski
-
The discourse around modifying deposit requirements includes various perspectives:
+Adjusting NFT deposit requirements on Polkadot and Kusama Asset Hubs involves key challenges:
+State Growth and Technical Concerns: Lowering deposit requirements can lead to increased blockchain state size, potentially causing state bloat. This growth needs to be managed to prevent strain on the network's resources and maintain operational efficiency.
+Network Security and Market Response: Reduced deposits might increase transaction volume, potentially bloating the state, thereby impacting network security. Additionally, adapting to the cryptocurrency market's volatility is crucial. The mechanism for setting deposit amounts must be responsive yet stable, avoiding undue complexity for users.
+Economic Impact on Previous Stakeholders: The change could have varied economic effects on previous (before the change) creators, platform operators, and investors. Balancing these interests is essential to ensure the adjustment benefits the ecosystem without negatively impacting its value dynamics. However, in the particular case of the Polkadot and Kusama Asset Hubs this does not pose a concern, since there are very few collections currently and thus previous stakeholders wouldn't be much affected. As of 9 January 2024 there are 42 collections on the Polkadot Asset Hub and 191 on the Kusama Asset Hub, with relatively low volume.
+The change is backwards compatible. "Spam" could be prevented via an OpenGov proposal to forceDestoy
a list of collections that are not suitable.
The prevention of "spam" could be prevented by OpenGov proposal to forceDestoy
list of collections that are not suitable.
This change is not expected to have a significant impact on the overall performance of the Polkadot Asset Hub. However, monitoring the network closely, especially in the initial stages after implementation, is crucial to identify and mitigate any potential issues.
-Additionally, a supplementary proposal aims to augment the network's adaptability:
---Just from a technical perspective; I think the best we can do is to use a weak governance origin that is controlled by some consortium (ie. System Collective). -This origin could then update the NFT deposits any time the market conditions warrant it - obviously while honoring the storage deposit requirements. -To implement this, we need RFC#12 and the Parameters pallet from @xlc. --> OliverTY
-
This dynamic governance approach would facilitate a responsive and agile economic model for deposit management, ensuring that the network remains accessible and robust in the face of market volatility.
+The primary performance consideration stems from the potential for state bloat due to increased activity from lower deposit requirements. It's vital to monitor and manage this to avoid any negative impact on the chain's performance. Strategies for mitigating state bloat, including efficient data management and periodic reviews of storage requirements, will be essential.
The proposed change aims to enhance the user experience for artists, making Polkadot more accessible and user-friendly.
+The proposed change aims to enhance the user experience for artists, traders and utilizers of Kusama and Polkadot asset hub. Making Polkadot and Kusama more accessible and user-friendly.
The change does not impact compatibility as redeposit
function is already implemented.
There remain unresolved questions regarding the implementation of a function-based pricing model for deposits and the feasibility of linking deposits to a USD(x) value. These aspects require further exploration and discussion to ascertain their viability and potential impact on the ecosystem.
If accepted, this RFC could pave the way for further discussions and proposals aimed at enhancing the inclusivity and accessibility of the Polkadot ecosystem. Future work could also explore having a weak governance origin for deposits as proposed by Oliver.
+We recommend initially lowering the deposit to the suggested levels. Subsequently, based on the outcomes and feedback, we can continue discussions on more complex models such as function-based pricing or currency-linked deposits.
+If accepted, this RFC could pave the way for further discussions and proposals aimed at enhancing the inclusivity and accessibility of the Polkadot ecosystem.
diff --git a/searchindex.js b/searchindex.js index 1b93e746e..812a69d5b 100644 --- a/searchindex.js +++ b/searchindex.js @@ -1 +1 @@ -Object.assign(window.search, {"doc_urls":["introduction.html#introduction","approved/0001-agile-coretime.html#rfc-1-agile-coretime","approved/0001-agile-coretime.html#summary","approved/0001-agile-coretime.html#motivation","approved/0001-agile-coretime.html#present-system","approved/0001-agile-coretime.html#problems","approved/0001-agile-coretime.html#requirements","approved/0001-agile-coretime.html#stakeholders","approved/0001-agile-coretime.html#explanation","approved/0001-agile-coretime.html#overview","approved/0001-agile-coretime.html#detail","approved/0001-agile-coretime.html#specific-functions-of-the-coretime-chain","approved/0001-agile-coretime.html#notes-on-the-instantaneous-coretime-market","approved/0001-agile-coretime.html#notes-on-economics","approved/0001-agile-coretime.html#notes-on-types","approved/0001-agile-coretime.html#rollout","approved/0001-agile-coretime.html#performance-ergonomics-and-compatibility","approved/0001-agile-coretime.html#testing-security-and-privacy","approved/0001-agile-coretime.html#future-directions-and-related-material","approved/0001-agile-coretime.html#drawbacks-alternatives-and-unknowns","approved/0001-agile-coretime.html#prior-art-and-references","approved/0005-coretime-interface.html#rfc-5-coretime-interface","approved/0005-coretime-interface.html#summary","approved/0005-coretime-interface.html#motivation","approved/0005-coretime-interface.html#requirements","approved/0005-coretime-interface.html#stakeholders","approved/0005-coretime-interface.html#explanation","approved/0005-coretime-interface.html#ump-message-types","approved/0005-coretime-interface.html#dmp-message-types","approved/0005-coretime-interface.html#realistic-limits-of-the-usage","approved/0005-coretime-interface.html#performance-ergonomics-and-compatibility","approved/0005-coretime-interface.html#testing-security-and-privacy","approved/0005-coretime-interface.html#future-directions-and-related-material","approved/0005-coretime-interface.html#drawbacks-alternatives-and-unknowns","approved/0005-coretime-interface.html#prior-art-and-references","approved/0007-system-collator-selection.html#rfc-0007-system-collator-selection","approved/0007-system-collator-selection.html#summary","approved/0007-system-collator-selection.html#motivation","approved/0007-system-collator-selection.html#requirements","approved/0007-system-collator-selection.html#stakeholders","approved/0007-system-collator-selection.html#explanation","approved/0007-system-collator-selection.html#set-size","approved/0007-system-collator-selection.html#drawbacks","approved/0007-system-collator-selection.html#testing-security-and-privacy","approved/0007-system-collator-selection.html#performance-ergonomics-and-compatibility","approved/0007-system-collator-selection.html#performance","approved/0007-system-collator-selection.html#ergonomics","approved/0007-system-collator-selection.html#compatibility","approved/0007-system-collator-selection.html#prior-art-and-references","approved/0007-system-collator-selection.html#written-discussions","approved/0007-system-collator-selection.html#prior-feedback-and-input-from","approved/0007-system-collator-selection.html#unresolved-questions","approved/0007-system-collator-selection.html#future-directions-and-related-material","approved/0008-parachain-bootnodes-dht.html#rfc-0008-store-parachain-bootnodes-in-relay-chain-dht","approved/0008-parachain-bootnodes-d
ht.html#summary","approved/0008-parachain-bootnodes-dht.html#motivation","approved/0008-parachain-bootnodes-dht.html#stakeholders","approved/0008-parachain-bootnodes-dht.html#explanation","approved/0008-parachain-bootnodes-dht.html#dht-provider-registration","approved/0008-parachain-bootnodes-dht.html#new-networking-protocol","approved/0008-parachain-bootnodes-dht.html#drawbacks","approved/0008-parachain-bootnodes-dht.html#testing-security-and-privacy","approved/0008-parachain-bootnodes-dht.html#performance-ergonomics-and-compatibility","approved/0008-parachain-bootnodes-dht.html#performance","approved/0008-parachain-bootnodes-dht.html#ergonomics","approved/0008-parachain-bootnodes-dht.html#compatibility","approved/0008-parachain-bootnodes-dht.html#prior-art-and-references","approved/0008-parachain-bootnodes-dht.html#unresolved-questions","approved/0008-parachain-bootnodes-dht.html#future-directions-and-related-material","approved/0012-process-for-adding-new-collectives.html#rfc-0012-process-for-adding-new-system-collectives","approved/0012-process-for-adding-new-collectives.html#summary","approved/0012-process-for-adding-new-collectives.html#motivation","approved/0012-process-for-adding-new-collectives.html#stakeholders","approved/0012-process-for-adding-new-collectives.html#explanation","approved/0012-process-for-adding-new-collectives.html#removing-collectives","approved/0012-process-for-adding-new-collectives.html#drawbacks","approved/0012-process-for-adding-new-collectives.html#testing-security-and-privacy","approved/0012-process-for-adding-new-collectives.html#performance-ergonomics-and-compatibility","approved/0012-process-for-adding-new-collectives.html#prior-art-and-references","approved/0012-process-for-adding-new-collectives.html#unresolved-questions","approved/0014-improve-locking-mechanism-for-parachains.html#rfc-0014-improve-locking-mechanism-for-parachains","approved/0014-improve-locking-mechanism-for-parachains.html#summary","approved/0014-improve-locking-mechanism-for-parachains.html#motivation","approved/0014-improve-locking-mechanism-for-parachains.html#requirements","approved/0014-improve-locking-mechanism-for-parachains.html#stakeholders","approved/0014-improve-locking-mechanism-for-parachains.html#explanation","approved/0014-improve-locking-mechanism-for-parachains.html#status-quo","approved/0014-improve-locking-mechanism-for-parachains.html#proposed-changes","approved/0014-improve-locking-mechanism-for-parachains.html#migration","approved/0014-improve-locking-mechanism-for-parachains.html#drawbacks","approved/0014-improve-locking-mechanism-for-parachains.html#testing-security-and-privacy","approved/0014-improve-locking-mechanism-for-parachains.html#performance","approved/0014-improve-locking-mechanism-for-parachains.html#ergonomics","approved/0014-improve-locking-mechanism-for-parachains.html#compatibility","approved/0014-improve-locking-mechanism-for-parachains.html#prior-art-and-references","approved/0014-improve-locking-mechanism-for-parachains.html#unresolved-questions","approved/0014-improve-locking-mechanism-for-parachains.html#future-directions-and-related-material","approved/0022-adopt-encointer-runtime.html#rfc-0022-adopt-encointer-runtime","approved/0022-adopt-encointer-runtime.html#summary","approved/0022-adopt-encointer-runtime.html#motivation","approved/0022-adopt-encointer-runtime.html#stakeholders","approved/0022-adopt-encointer-runtime.html#explanation","approved/0022-adopt-encointer-runtime.html#drawbacks","approved/0022-adopt-encointer-runtime.html#t
esting-security-and-privacy","approved/0022-adopt-encointer-runtime.html#performance-ergonomics-and-compatibility","approved/0022-adopt-encointer-runtime.html#prior-art-and-references","approved/0022-adopt-encointer-runtime.html#unresolved-questions","approved/0022-adopt-encointer-runtime.html#future-directions-and-related-material","approved/0032-minimal-relay.html#rfc-0032-minimal-relay","approved/0032-minimal-relay.html#summary","approved/0032-minimal-relay.html#motivation","approved/0032-minimal-relay.html#stakeholders","approved/0032-minimal-relay.html#explanation","approved/0032-minimal-relay.html#migrations","approved/0032-minimal-relay.html#interfaces","approved/0032-minimal-relay.html#functional-architecture","approved/0032-minimal-relay.html#resource-allocation","approved/0032-minimal-relay.html#deployment","approved/0032-minimal-relay.html#kusama","approved/0032-minimal-relay.html#drawbacks","approved/0032-minimal-relay.html#testing-security-and-privacy","approved/0032-minimal-relay.html#performance-ergonomics-and-compatibility","approved/0032-minimal-relay.html#performance","approved/0032-minimal-relay.html#ergonomics","approved/0032-minimal-relay.html#compatibility","approved/0032-minimal-relay.html#prior-art-and-references","approved/0032-minimal-relay.html#unresolved-questions","approved/0032-minimal-relay.html#future-directions-and-related-material","approved/0050-fellowship-salaries.html#rfc-0050-fellowship-salaries","approved/0050-fellowship-salaries.html#summary","approved/0050-fellowship-salaries.html#motivation","approved/0050-fellowship-salaries.html#stakeholders","approved/0050-fellowship-salaries.html#explanation","approved/0050-fellowship-salaries.html#salary-asset","approved/0050-fellowship-salaries.html#projections","approved/0050-fellowship-salaries.html#updates","approved/0050-fellowship-salaries.html#drawbacks","approved/0050-fellowship-salaries.html#testing-security-and-privacy","approved/0050-fellowship-salaries.html#performance-ergonomics-and-compatibility","approved/0050-fellowship-salaries.html#performance","approved/0050-fellowship-salaries.html#ergonomics","approved/0050-fellowship-salaries.html#compatibility","approved/0050-fellowship-salaries.html#prior-art-and-references","approved/0050-fellowship-salaries.html#unresolved-questions","approved/0056-one-transaction-per-notification.html#rfc-0056-enforce-only-one-transaction-per-notification","approved/0056-one-transaction-per-notification.html#summary","approved/0056-one-transaction-per-notification.html#motivation","approved/0056-one-transaction-per-notification.html#stakeholders","approved/0056-one-transaction-per-notification.html#explanation","approved/0056-one-transaction-per-notification.html#drawbacks","approved/0056-one-transaction-per-notification.html#testing-security-and-privacy","approved/0056-one-transaction-per-notification.html#performance-ergonomics-and-compatibility","approved/0056-one-transaction-per-notification.html#performance","approved/0056-one-transaction-per-notification.html#ergonomics","approved/0056-one-transaction-per-notification.html#compatibility","approved/0056-one-transaction-per-notification.html#prior-art-and-references","approved/0056-one-transaction-per-notification.html#unresolved-questions","approved/0056-one-transaction-per-notification.html#future-directions-and-related-material","proposed/0004-remove-unnecessary-allocator-usage.html#rfc-0004-remove-the-host-side-runtime-memory-allocator","proposed/0004-remove-unnecessary-allocator-usage.html#summary","proposed/
0004-remove-unnecessary-allocator-usage.html#motivation","proposed/0004-remove-unnecessary-allocator-usage.html#stakeholders","proposed/0004-remove-unnecessary-allocator-usage.html#explanation","proposed/0004-remove-unnecessary-allocator-usage.html#new-host-functions","proposed/0004-remove-unnecessary-allocator-usage.html#other-changes","proposed/0004-remove-unnecessary-allocator-usage.html#drawbacks","proposed/0004-remove-unnecessary-allocator-usage.html#prior-art","proposed/0004-remove-unnecessary-allocator-usage.html#unresolved-questions","proposed/0004-remove-unnecessary-allocator-usage.html#future-possibilities","proposed/000x-assethub.html#rfc-0000-lowering-nft-deposits-on-polkadot-and-kusama-asset-hubs","proposed/000x-assethub.html#summary","proposed/000x-assethub.html#motivation","proposed/000x-assethub.html#requirements","proposed/000x-assethub.html#stakeholders","proposed/000x-assethub.html#explanation","proposed/000x-assethub.html#drawbacks","proposed/000x-assethub.html#testing-security-and-privacy","proposed/000x-assethub.html#performance-ergonomics-and-compatibility","proposed/000x-assethub.html#performance","proposed/000x-assethub.html#ergonomics","proposed/000x-assethub.html#compatibility","proposed/000x-assethub.html#unresolved-questions","proposed/000x-assethub.html#future-directions-and-related-material","proposed/0026-sassafras-consensus.html#rfc-0026-sassafras-consensus-protocol","proposed/0026-sassafras-consensus.html#abstract","proposed/0026-sassafras-consensus.html#1-motivation","proposed/0026-sassafras-consensus.html#11-relevance-to-implementors","proposed/0026-sassafras-consensus.html#12-supporting-sassafras-for-polkadot","proposed/0026-sassafras-consensus.html#2-stakeholders","proposed/0026-sassafras-consensus.html#21-blockchain-developers","proposed/0026-sassafras-consensus.html#22-polkadot-ecosystem-contributors","proposed/0026-sassafras-consensus.html#3-notation-and-convention","proposed/0026-sassafras-consensus.html#31-data-structures-definitions-and-encoding","proposed/0026-sassafras-consensus.html#32-pseudo-code","proposed/0026-sassafras-consensus.html#33-incremental-introduction-of-types-and-functions","proposed/0026-sassafras-consensus.html#4-protocol-introduction","proposed/0026-sassafras-consensus.html#41-submission-of-candidate-tickets","proposed/0026-sassafras-consensus.html#42-validation-of-candidate-tickets","proposed/0026-sassafras-consensus.html#43-tickets-and-slots-binding","proposed/0026-sassafras-consensus.html#44-claim-of-ticket-ownership","proposed/0026-sassafras-consensus.html#45-validation-of-ticket-ownership","proposed/0026-sassafras-consensus.html#5-bandersnatch-vrfs-cryptographic-primitives","proposed/0026-sassafras-consensus.html#51-vrf-input","proposed/0026-sassafras-consensus.html#52-vrf-preoutput","proposed/0026-sassafras-consensus.html#53-vrf-signature-data","proposed/0026-sassafras-consensus.html#54-vrf-signature","proposed/0026-sassafras-consensus.html#6-sassafras-protocol","proposed/0026-sassafras-consensus.html#61-epochs-first-block","proposed/0026-sassafras-consensus.html#62-creation-and-submission-of-candidate-tickets","proposed/0026-sassafras-consensus.html#63-validation-of-candidate-tickets","proposed/0026-sassafras-consensus.html#64-ticket-slot-binding","proposed/0026-sassafras-consensus.html#65-slot-claim-production","proposed/0026-sassafras-consensus.html#66-slot-claim-verification","proposed/0026-sassafras-consensus.html#661-primary-method","proposed/0026-sassafras-consensus.html#67-randomness-accumulator","proposed/0026-s
assafras-consensus.html#7-drawbacks","proposed/0026-sassafras-consensus.html#8-testing-security-and-privacy","proposed/0026-sassafras-consensus.html#9-performance-ergonomics-and-compatibility","proposed/0026-sassafras-consensus.html#91-performance","proposed/0026-sassafras-consensus.html#92-ergonomics","proposed/0026-sassafras-consensus.html#93-compatibility","proposed/0026-sassafras-consensus.html#10-prior-art-and-references","proposed/0026-sassafras-consensus.html#11-unresolved-questions","proposed/0026-sassafras-consensus.html#12-future-directions-and-related-material","proposed/0026-sassafras-consensus.html#121-interactions-with-on-chain-code","proposed/0026-sassafras-consensus.html#122-deployment-strategies","proposed/0026-sassafras-consensus.html#123-zk-snark-srs-initialization","proposed/0026-sassafras-consensus.html#124-anonymous-submission-of-tickets","proposed/0034-xcm-absolute-location-account-derivation.html#rfc-34-xcm-absolute-location-account-derivation","proposed/0034-xcm-absolute-location-account-derivation.html#summary","proposed/0034-xcm-absolute-location-account-derivation.html#motivation","proposed/0034-xcm-absolute-location-account-derivation.html#stakeholders","proposed/0034-xcm-absolute-location-account-derivation.html#explanation","proposed/0034-xcm-absolute-location-account-derivation.html#drawbacks","proposed/0034-xcm-absolute-location-account-derivation.html#testing-security-and-privacy","proposed/0034-xcm-absolute-location-account-derivation.html#performance-ergonomics-and-compatibility","proposed/0034-xcm-absolute-location-account-derivation.html#performance","proposed/0034-xcm-absolute-location-account-derivation.html#ergonomics","proposed/0034-xcm-absolute-location-account-derivation.html#compatibility","proposed/0034-xcm-absolute-location-account-derivation.html#prior-art-and-references","proposed/0034-xcm-absolute-location-account-derivation.html#unresolved-questions","proposed/0042-extrinsics-state-version.html#rfc-0042-add-system-version-that-replaces-stateversion-on-runtimeversion","proposed/0042-extrinsics-state-version.html#summary","proposed/0042-extrinsics-state-version.html#motivation","proposed/0042-extrinsics-state-version.html#stakeholders","proposed/0042-extrinsics-state-version.html#explanation","proposed/0042-extrinsics-state-version.html#drawbacks","proposed/0042-extrinsics-state-version.html#testing-security-and-privacy","proposed/0042-extrinsics-state-version.html#performance-ergonomics-and-compatibility","proposed/0042-extrinsics-state-version.html#performance","proposed/0042-extrinsics-state-version.html#ergonomics","proposed/0042-extrinsics-state-version.html#compatibility","proposed/0042-extrinsics-state-version.html#prior-art-and-references","proposed/0042-extrinsics-state-version.html#unresolved-questions","proposed/0042-extrinsics-state-version.html#future-directions-and-related-material","proposed/0044-rent-based-registration.html#rfc-0044-rent-based-registration-model","proposed/0044-rent-based-registration.html#summary","proposed/0044-rent-based-registration.html#motivation","proposed/0044-rent-based-registration.html#requirements","proposed/0044-rent-based-registration.html#stakeholders","proposed/0044-rent-based-registration.html#explanation","proposed/0044-rent-based-registration.html#registering-an-on-demand-parachain","proposed/0044-rent-based-registration.html#on-demand-parachain-pruning","proposed/0044-rent-based-registration.html#ensuring-rent-is-paid","proposed/0044-rent-based-registration.html#on-demand-para-re-registrati
on","proposed/0044-rent-based-registration.html#drawbacks","proposed/0044-rent-based-registration.html#testing-security-and-privacy","proposed/0044-rent-based-registration.html#performance-ergonomics-and-compatibility","proposed/0044-rent-based-registration.html#performance","proposed/0044-rent-based-registration.html#ergonomics","proposed/0044-rent-based-registration.html#compatibility","proposed/0044-rent-based-registration.html#prior-art-and-references","proposed/0044-rent-based-registration.html#unresolved-questions","proposed/0044-rent-based-registration.html#future-directions-and-related-material","proposed/0046-metadata-for-offline-signers.html#rfc-0000-metadata-for-offline-signers","proposed/0046-metadata-for-offline-signers.html#summary","proposed/0046-metadata-for-offline-signers.html#motivation","proposed/0046-metadata-for-offline-signers.html#background","proposed/0046-metadata-for-offline-signers.html#solution-requirements","proposed/0046-metadata-for-offline-signers.html#stakeholders","proposed/0046-metadata-for-offline-signers.html#explanation","proposed/0046-metadata-for-offline-signers.html#definitions","proposed/0046-metadata-for-offline-signers.html#general-flow","proposed/0046-metadata-for-offline-signers.html#metadata-modularization","proposed/0046-metadata-for-offline-signers.html#merging-protocol","proposed/0046-metadata-for-offline-signers.html#complete-binary-merkle-tree-construction-protocol","proposed/0046-metadata-for-offline-signers.html#digest","proposed/0046-metadata-for-offline-signers.html#shortening","proposed/0046-metadata-for-offline-signers.html#transmission","proposed/0046-metadata-for-offline-signers.html#offline-verification","proposed/0046-metadata-for-offline-signers.html#chain-verification","proposed/0046-metadata-for-offline-signers.html#drawbacks","proposed/0046-metadata-for-offline-signers.html#increased-transaction-size","proposed/0046-metadata-for-offline-signers.html#transition-overhead","proposed/0046-metadata-for-offline-signers.html#testing-security-and-privacy","proposed/0046-metadata-for-offline-signers.html#performance-ergonomics-and-compatibility","proposed/0046-metadata-for-offline-signers.html#performance","proposed/0046-metadata-for-offline-signers.html#ergonomics","proposed/0046-metadata-for-offline-signers.html#compatibility","proposed/0046-metadata-for-offline-signers.html#prior-art-and-references","proposed/0046-metadata-for-offline-signers.html#unresolved-questions","proposed/0046-metadata-for-offline-signers.html#future-directions-and-related-material","proposed/0047-assignment-of-availability-chunks.html#rfc-0047-assignment-of-availability-chunks-to-validators","proposed/0047-assignment-of-availability-chunks.html#summary","proposed/0047-assignment-of-availability-chunks.html#motivation","proposed/0047-assignment-of-availability-chunks.html#stakeholders","proposed/0047-assignment-of-availability-chunks.html#explanation","proposed/0047-assignment-of-availability-chunks.html#systematic-erasure-codes","proposed/0047-assignment-of-availability-chunks.html#availability-recovery-at-present","proposed/0047-assignment-of-availability-chunks.html#availability-recovery-from-systematic-chunks","proposed/0047-assignment-of-availability-chunks.html#chunk-assignment-function","proposed/0047-assignment-of-availability-chunks.html#network-protocol","proposed/0047-assignment-of-availability-chunks.html#upgrade-path","proposed/0047-assignment-of-availability-chunks.html#drawbacks","proposed/0047-assignment-of-availability-chunks.html#testing-s
ecurity-and-privacy","proposed/0047-assignment-of-availability-chunks.html#performance-ergonomics-and-compatibility","proposed/0047-assignment-of-availability-chunks.html#performance","proposed/0047-assignment-of-availability-chunks.html#ergonomics","proposed/0047-assignment-of-availability-chunks.html#compatibility","proposed/0047-assignment-of-availability-chunks.html#prior-art-and-references","proposed/0047-assignment-of-availability-chunks.html#unresolved-questions","proposed/0047-assignment-of-availability-chunks.html#future-directions-and-related-material","proposed/0047-assignment-of-availability-chunks.html#appendix-a","proposed/0059-nodes-capabilities-discovery.html#rfc-0059-add-a-discovery-mechanism-for-nodes-based-on-their-capabilities","proposed/0059-nodes-capabilities-discovery.html#summary","proposed/0059-nodes-capabilities-discovery.html#motivation","proposed/0059-nodes-capabilities-discovery.html#stakeholders","proposed/0059-nodes-capabilities-discovery.html#explanation","proposed/0059-nodes-capabilities-discovery.html#capabilities","proposed/0059-nodes-capabilities-discovery.html#dht-provider-registration","proposed/0059-nodes-capabilities-discovery.html#secondary-dhts","proposed/0059-nodes-capabilities-discovery.html#head-of-the-chain-providers","proposed/0059-nodes-capabilities-discovery.html#drawbacks","proposed/0059-nodes-capabilities-discovery.html#testing-security-and-privacy","proposed/0059-nodes-capabilities-discovery.html#performance-ergonomics-and-compatibility","proposed/0059-nodes-capabilities-discovery.html#performance","proposed/0059-nodes-capabilities-discovery.html#ergonomics","proposed/0059-nodes-capabilities-discovery.html#compatibility","proposed/0059-nodes-capabilities-discovery.html#prior-art-and-references","proposed/0059-nodes-capabilities-discovery.html#unresolved-questions","proposed/0059-nodes-capabilities-discovery.html#future-directions-and-related-material","proposed/0061-allocator-inside-of-runtime.html#rfc-0061-support-allocator-inside-of-runtime","proposed/0061-allocator-inside-of-runtime.html#summary","proposed/0061-allocator-inside-of-runtime.html#motivation","proposed/0061-allocator-inside-of-runtime.html#stakeholders","proposed/0061-allocator-inside-of-runtime.html#explanation","proposed/0061-allocator-inside-of-runtime.html#runtime-side-spec","proposed/0061-allocator-inside-of-runtime.html#client-side-spec","proposed/0061-allocator-inside-of-runtime.html#drawbacks","proposed/0061-allocator-inside-of-runtime.html#testing-security-and-privacy","proposed/0061-allocator-inside-of-runtime.html#performance-ergonomics-and-compatibility","proposed/0061-allocator-inside-of-runtime.html#performance","proposed/0061-allocator-inside-of-runtime.html#ergonomics","proposed/0061-allocator-inside-of-runtime.html#compatibility","proposed/0061-allocator-inside-of-runtime.html#prior-art-and-references","proposed/0061-allocator-inside-of-runtime.html#unresolved-questions","proposed/0061-allocator-inside-of-runtime.html#future-directions-and-related-material","proposed/0062-lowering-existential-deposit-on-assethub.html#rfc-0062-lowering-existential-deposit-on--asset-hub-for-polkadot","proposed/0062-lowering-existential-deposit-on-assethub.html#summary","proposed/0062-lowering-existential-deposit-on-assethub.html#motivation","proposed/0062-lowering-existential-deposit-on-assethub.html#stakeholders","proposed/0062-lowering-existential-deposit-on-assethub.html#explanation","proposed/0062-lowering-existential-deposit-on-assethub.html#drawbacks","proposed/0062-low
ering-existential-deposit-on-assethub.html#testing-security-and-privacy","proposed/0062-lowering-existential-deposit-on-assethub.html#performance-ergonomics-and-compatibility","proposed/0062-lowering-existential-deposit-on-assethub.html#performance","proposed/0062-lowering-existential-deposit-on-assethub.html#ergonomics","proposed/0062-lowering-existential-deposit-on-assethub.html#compatibility","proposed/0062-lowering-existential-deposit-on-assethub.html#unresolved-questions","proposed/0062-lowering-existential-deposit-on-assethub.html#future-directions-and-related-material","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#rfc-0006-dynamic-pricing-for-bulk-coretime-sales","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#summary","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#motivation","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#requirements","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#stakeholders","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#explanation","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#overview","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#parameters","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#function","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#pseudo-code","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#properties-of-the-curve","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#example-configurations","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#drawbacks","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#prior-art-and-references","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#future-possibilities","stale/0006-dynamic-pricing-for-bulk-coretime-sales.html#references","stale/0009-improved-net-light-client-requests.html#rfc-0009-improved-light-client-requests-networking-protocol","stale/0009-improved-net-light-client-requests.html#summary","stale/0009-improved-net-light-client-requests.html#motivation","stale/0009-improved-net-light-client-requests.html#stakeholders","stale/0009-improved-net-light-client-requests.html#explanation","stale/0009-improved-net-light-client-requests.html#drawbacks","stale/0009-improved-net-light-client-requests.html#testing-security-and-privacy","stale/0009-improved-net-light-client-requests.html#performance-ergonomics-and-compatibility","stale/0009-improved-net-light-client-requests.html#performance","stale/0009-improved-net-light-client-requests.html#ergonomics","stale/0009-improved-net-light-client-requests.html#compatibility","stale/0009-improved-net-light-client-requests.html#prior-art-and-references","stale/0009-improved-net-light-client-requests.html#unresolved-questions","stale/0009-improved-net-light-client-requests.html#future-directions-and-related-material","stale/0010-burn-coretime-revenue.html#rfc-0010-burn-coretime-revenue","stale/0010-burn-coretime-revenue.html#summary","stale/0010-burn-coretime-revenue.html#motivation","stale/0010-burn-coretime-revenue.html#stakeholders","stale/0010-burn-coretime-revenue.html#explanation","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#rfc-0011-add-new-path-to-account-creation-on-asset-hubs","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#summary","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#motivation","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#requirements","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#stakeholders","stale/0011-add-new-path-to-accoun
t-creation-on-asset-hubs.html#explanation","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#drawbacks","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#testing-security-and-privacy","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#performance-ergonomics-and-compatibility","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#performance","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#ergonomics","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#compatibility","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#prior-art-and-references","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#unresolved-questions","stale/0011-add-new-path-to-account-creation-on-asset-hubs.html#future-directions-and-related-material","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#rfc-0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#summary","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#motivation","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#stakeholders","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#explanation","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#coreinitialize_block","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#blockbuilderlast_inherent","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#combined","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#drawbacks","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#testing-security-and-privacy","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#performance-ergonomics-and-compatibility","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#performance","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#ergonomics","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#compatibility","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#prior-art-and-references","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#unresolved-questions","stale/0013-prepare-blockbuilder-and-core-runtime-apis-for-mbms.html#future-directions-and-related-material","stale/0015-market-design-revisit.html#rfc-0015-market-design-revisit","stale/0015-market-design-revisit.html#summary","stale/0015-market-design-revisit.html#motivation","stale/0015-market-design-revisit.html#stakeholders","stale/0015-market-design-revisit.html#explanation","stale/0015-market-design-revisit.html#bulk-markets","stale/0015-market-design-revisit.html#benefits-of-this-system","stale/0015-market-design-revisit.html#further-discussion-points","stale/0015-market-design-revisit.html#drawbacks","stale/0015-market-design-revisit.html#prior-art-and-references","stale/0015-market-design-revisit.html#unresolved-questions","stale/0020-treasurer-track-confirmation-period-duration-modification.html#rfc-0020-treasurer-track-confirmation-period-duration-modification","stale/0020-treasurer-track-confirmation-period-duration-modification.html#summary","stale/0020-treasurer-track-confirmation-period-duration-modification.html#motivation","stale/0020-treasurer-track-confirmation-period-duration-modification.html#stakeholders","stale/0020-treasurer-track-confirmation-period-duration-modification.html#explanation","stale/0020-treasurer-track-confirmation-period
-duration-modification.html#drawbacks","stale/0020-treasurer-track-confirmation-period-duration-modification.html#testing-security-and-privacy","stale/0020-treasurer-track-confirmation-period-duration-modification.html#performance-ergonomics-and-compatibility","stale/0020-treasurer-track-confirmation-period-duration-modification.html#performance","stale/0020-treasurer-track-confirmation-period-duration-modification.html#ergonomics--compatibility","stale/0020-treasurer-track-confirmation-period-duration-modification.html#prior-art-and-references","stale/0020-treasurer-track-confirmation-period-duration-modification.html#unresolved-questions","stale/0020-treasurer-track-confirmation-period-duration-modification.html#future-directions-and-related-material","stale/0035-conviction-voting-delegation-modifications.html#rfc-0035-conviction-voting-delegation-modifications","stale/0035-conviction-voting-delegation-modifications.html#summary","stale/0035-conviction-voting-delegation-modifications.html#motivation","stale/0035-conviction-voting-delegation-modifications.html#stakeholders","stale/0035-conviction-voting-delegation-modifications.html#explanation","stale/0035-conviction-voting-delegation-modifications.html#drawbacks","stale/0035-conviction-voting-delegation-modifications.html#testing-security-and-privacy","stale/0035-conviction-voting-delegation-modifications.html#performance-ergonomics-and-compatibility","stale/0035-conviction-voting-delegation-modifications.html#performance","stale/0035-conviction-voting-delegation-modifications.html#ergonomics--compatibility","stale/0035-conviction-voting-delegation-modifications.html#prior-art-and-references","stale/0035-conviction-voting-delegation-modifications.html#unresolved-questions","stale/0035-conviction-voting-delegation-modifications.html#future-directions-and-related-material","stale/0043-storage-proof-size-hostfunction.html#rfc-0043-introduce-storage_proof_size-host-function-for-improved-parachain-block-utilization","stale/0043-storage-proof-size-hostfunction.html#summary","stale/0043-storage-proof-size-hostfunction.html#motivation","stale/0043-storage-proof-size-hostfunction.html#stakeholders","stale/0043-storage-proof-size-hostfunction.html#explanation","stale/0043-storage-proof-size-hostfunction.html#performance-ergonomics-and-compatibility","stale/0043-storage-proof-size-hostfunction.html#performance","stale/0043-storage-proof-size-hostfunction.html#ergonomics","stale/0043-storage-proof-size-hostfunction.html#compatibility","stale/0043-storage-proof-size-hostfunction.html#prior-art-and-references","stale/0048-session-keys-runtime-api.html#rfc-0048-generate-ownership-proof-for-sessionkeys","stale/0048-session-keys-runtime-api.html#summary","stale/0048-session-keys-runtime-api.html#motivation","stale/0048-session-keys-runtime-api.html#stakeholders","stale/0048-session-keys-runtime-api.html#explanation","stale/0048-session-keys-runtime-api.html#drawbacks","stale/0048-session-keys-runtime-api.html#testing-security-and-privacy","stale/0048-session-keys-runtime-api.html#performance-ergonomics-and-compatibility","stale/0048-session-keys-runtime-api.html#performance","stale/0048-session-keys-runtime-api.html#ergonomics","stale/0048-session-keys-runtime-api.html#compatibility","stale/0048-session-keys-runtime-api.html#prior-art-and-references","stale/0048-session-keys-runtime-api.html#unresolved-questions","stale/0048-session-keys-runtime-api.html#future-directions-and-related-material","stale/0054-remove-heap-pages.html#rfc-0054-remove-the-concep
t-of-heap-pages-from-the-client","stale/0054-remove-heap-pages.html#summary","stale/0054-remove-heap-pages.html#motivation","stale/0054-remove-heap-pages.html#stakeholders","stale/0054-remove-heap-pages.html#explanation","stale/0054-remove-heap-pages.html#drawbacks","stale/0054-remove-heap-pages.html#testing-security-and-privacy","stale/0054-remove-heap-pages.html#performance-ergonomics-and-compatibility","stale/0054-remove-heap-pages.html#performance","stale/0054-remove-heap-pages.html#ergonomics","stale/0054-remove-heap-pages.html#compatibility","stale/0054-remove-heap-pages.html#prior-art-and-references","stale/0054-remove-heap-pages.html#unresolved-questions","stale/0054-remove-heap-pages.html#future-directions-and-related-material"],"index":{"documentStore":{"docInfo":{"0":{"body":16,"breadcrumbs":2,"title":1},"1":{"body":65,"breadcrumbs":8,"title":4},"10":{"body":562,"breadcrumbs":5,"title":1},"100":{"body":52,"breadcrumbs":6,"title":1},"101":{"body":73,"breadcrumbs":6,"title":1},"102":{"body":38,"breadcrumbs":6,"title":1},"103":{"body":7,"breadcrumbs":8,"title":3},"104":{"body":1,"breadcrumbs":8,"title":3},"105":{"body":4,"breadcrumbs":8,"title":3},"106":{"body":2,"breadcrumbs":7,"title":2},"107":{"body":4,"breadcrumbs":9,"title":4},"108":{"body":54,"breadcrumbs":8,"title":4},"109":{"body":22,"breadcrumbs":5,"title":1},"11":{"body":630,"breadcrumbs":8,"title":4},"110":{"body":85,"breadcrumbs":5,"title":1},"111":{"body":16,"breadcrumbs":5,"title":1},"112":{"body":42,"breadcrumbs":5,"title":1},"113":{"body":62,"breadcrumbs":5,"title":1},"114":{"body":62,"breadcrumbs":5,"title":1},"115":{"body":127,"breadcrumbs":6,"title":2},"116":{"body":101,"breadcrumbs":6,"title":2},"117":{"body":324,"breadcrumbs":5,"title":1},"118":{"body":117,"breadcrumbs":5,"title":1},"119":{"body":12,"breadcrumbs":5,"title":1},"12":{"body":103,"breadcrumbs":8,"title":4},"120":{"body":13,"breadcrumbs":7,"title":3},"121":{"body":6,"breadcrumbs":7,"title":3},"122":{"body":12,"breadcrumbs":5,"title":1},"123":{"body":35,"breadcrumbs":5,"title":1},"124":{"body":17,"breadcrumbs":5,"title":1},"125":{"body":7,"breadcrumbs":7,"title":3},"126":{"body":14,"breadcrumbs":6,"title":2},"127":{"body":23,"breadcrumbs":8,"title":4},"128":{"body":48,"breadcrumbs":8,"title":4},"129":{"body":16,"breadcrumbs":5,"title":1},"13":{"body":410,"breadcrumbs":6,"title":2},"130":{"body":70,"breadcrumbs":5,"title":1},"131":{"body":4,"breadcrumbs":5,"title":1},"132":{"body":78,"breadcrumbs":5,"title":1},"133":{"body":92,"breadcrumbs":6,"title":2},"134":{"body":67,"breadcrumbs":5,"title":1},"135":{"body":12,"breadcrumbs":5,"title":1},"136":{"body":13,"breadcrumbs":5,"title":1},"137":{"body":1,"breadcrumbs":7,"title":3},"138":{"body":0,"breadcrumbs":7,"title":3},"139":{"body":1,"breadcrumbs":5,"title":1},"14":{"body":905,"breadcrumbs":6,"title":2},"140":{"body":1,"breadcrumbs":5,"title":1},"141":{"body":1,"breadcrumbs":5,"title":1},"142":{"body":12,"breadcrumbs":7,"title":3},"143":{"body":2,"breadcrumbs":6,"title":2},"144":{"body":51,"breadcrumbs":14,"title":7},"145":{"body":44,"breadcrumbs":8,"title":1},"146":{"body":100,"breadcrumbs":8,"title":1},"147":{"body":3,"breadcrumbs":8,"title":1},"148":{"body":121,"breadcrumbs":8,"title":1},"149":{"body":33,"breadcrumbs":8,"title":1},"15":{"body":52,"breadcrumbs":5,"title":1},"150":{"body":1,"breadcrumbs":10,"title":3},"151":{"body":0,"breadcrumbs":10,"title":3},"152":{"body":1,"breadcrumbs":8,"title":1},"153":{"body":1,"breadcrumbs":8,"title":1},"154":{"body":19,"breadcrumbs":8,"title":1},"155":{"body"
:1,"breadcrumbs":10,"title":3},"156":{"body":1,"breadcrumbs":9,"title":2},"157":{"body":4,"breadcrumbs":11,"title":4},"158":{"body":45,"breadcrumbs":16,"title":8},"159":{"body":10,"breadcrumbs":9,"title":1},"16":{"body":33,"breadcrumbs":7,"title":3},"160":{"body":156,"breadcrumbs":9,"title":1},"161":{"body":4,"breadcrumbs":9,"title":1},"162":{"body":0,"breadcrumbs":9,"title":1},"163":{"body":1723,"breadcrumbs":11,"title":3},"164":{"body":52,"breadcrumbs":9,"title":1},"165":{"body":12,"breadcrumbs":9,"title":1},"166":{"body":10,"breadcrumbs":10,"title":2},"167":{"body":71,"breadcrumbs":10,"title":2},"168":{"body":26,"breadcrumbs":10,"title":2},"169":{"body":61,"breadcrumbs":18,"title":9},"17":{"body":40,"breadcrumbs":7,"title":3},"170":{"body":28,"breadcrumbs":10,"title":1},"171":{"body":55,"breadcrumbs":10,"title":1},"172":{"body":8,"breadcrumbs":10,"title":1},"173":{"body":69,"breadcrumbs":10,"title":1},"174":{"body":294,"breadcrumbs":10,"title":1},"175":{"body":77,"breadcrumbs":10,"title":1},"176":{"body":12,"breadcrumbs":12,"title":3},"177":{"body":0,"breadcrumbs":12,"title":3},"178":{"body":79,"breadcrumbs":10,"title":1},"179":{"body":13,"breadcrumbs":10,"title":1},"18":{"body":35,"breadcrumbs":8,"title":4},"180":{"body":7,"breadcrumbs":10,"title":1},"181":{"body":0,"breadcrumbs":11,"title":2},"182":{"body":23,"breadcrumbs":13,"title":4},"183":{"body":176,"breadcrumbs":10,"title":5},"184":{"body":38,"breadcrumbs":6,"title":1},"185":{"body":34,"breadcrumbs":7,"title":2},"186":{"body":21,"breadcrumbs":8,"title":3},"187":{"body":24,"breadcrumbs":9,"title":4},"188":{"body":0,"breadcrumbs":7,"title":2},"189":{"body":10,"breadcrumbs":8,"title":3},"19":{"body":49,"breadcrumbs":7,"title":3},"190":{"body":18,"breadcrumbs":9,"title":4},"191":{"body":10,"breadcrumbs":8,"title":3},"192":{"body":46,"breadcrumbs":10,"title":5},"193":{"body":109,"breadcrumbs":8,"title":3},"194":{"body":22,"breadcrumbs":10,"title":5},"195":{"body":35,"breadcrumbs":8,"title":3},"196":{"body":16,"breadcrumbs":9,"title":4},"197":{"body":13,"breadcrumbs":9,"title":4},"198":{"body":14,"breadcrumbs":9,"title":4},"199":{"body":16,"breadcrumbs":9,"title":4},"2":{"body":50,"breadcrumbs":5,"title":1},"20":{"body":34,"breadcrumbs":7,"title":3},"200":{"body":7,"breadcrumbs":9,"title":4},"201":{"body":48,"breadcrumbs":10,"title":5},"202":{"body":91,"breadcrumbs":8,"title":3},"203":{"body":104,"breadcrumbs":8,"title":3},"204":{"body":58,"breadcrumbs":9,"title":4},"205":{"body":199,"breadcrumbs":8,"title":3},"206":{"body":0,"breadcrumbs":8,"title":3},"207":{"body":313,"breadcrumbs":9,"title":4},"208":{"body":491,"breadcrumbs":10,"title":5},"209":{"body":41,"breadcrumbs":9,"title":4},"21":{"body":55,"breadcrumbs":8,"title":4},"210":{"body":146,"breadcrumbs":9,"title":4},"211":{"body":332,"breadcrumbs":9,"title":4},"212":{"body":54,"breadcrumbs":9,"title":4},"213":{"body":93,"breadcrumbs":8,"title":3},"214":{"body":75,"breadcrumbs":8,"title":3},"215":{"body":1,"breadcrumbs":7,"title":2},"216":{"body":17,"breadcrumbs":9,"title":4},"217":{"body":0,"breadcrumbs":9,"title":4},"218":{"body":32,"breadcrumbs":7,"title":2},"219":{"body":2,"breadcrumbs":7,"title":2},"22":{"body":43,"breadcrumbs":5,"title":1},"220":{"body":22,"breadcrumbs":7,"title":2},"221":{"body":25,"breadcrumbs":9,"title":4},"222":{"body":1,"breadcrumbs":8,"title":3},"223":{"body":15,"breadcrumbs":10,"title":5},"224":{"body":42,"breadcrumbs":9,"title":4},"225":{"body":19,"breadcrumbs":8,"title":3},"226":{"body":43,"breadcrumbs":10,"title":5},"227":{"body":24,"breadcrumbs":9,
"title":4},"228":{"body":45,"breadcrumbs":14,"title":7},"229":{"body":22,"breadcrumbs":8,"title":1},"23":{"body":37,"breadcrumbs":5,"title":1},"230":{"body":37,"breadcrumbs":8,"title":1},"231":{"body":2,"breadcrumbs":8,"title":1},"232":{"body":202,"breadcrumbs":8,"title":1},"233":{"body":3,"breadcrumbs":8,"title":1},"234":{"body":26,"breadcrumbs":10,"title":3},"235":{"body":0,"breadcrumbs":10,"title":3},"236":{"body":8,"breadcrumbs":8,"title":1},"237":{"body":6,"breadcrumbs":8,"title":1},"238":{"body":8,"breadcrumbs":8,"title":1},"239":{"body":10,"breadcrumbs":10,"title":3},"24":{"body":73,"breadcrumbs":5,"title":1},"240":{"body":7,"breadcrumbs":9,"title":2},"241":{"body":49,"breadcrumbs":16,"title":8},"242":{"body":33,"breadcrumbs":9,"title":1},"243":{"body":94,"breadcrumbs":9,"title":1},"244":{"body":6,"breadcrumbs":9,"title":1},"245":{"body":97,"breadcrumbs":9,"title":1},"246":{"body":11,"breadcrumbs":9,"title":1},"247":{"body":4,"breadcrumbs":11,"title":3},"248":{"body":8,"breadcrumbs":11,"title":3},"249":{"body":4,"breadcrumbs":9,"title":1},"25":{"body":21,"breadcrumbs":5,"title":1},"250":{"body":3,"breadcrumbs":9,"title":1},"251":{"body":3,"breadcrumbs":9,"title":1},"252":{"body":12,"breadcrumbs":11,"title":3},"253":{"body":4,"breadcrumbs":10,"title":2},"254":{"body":9,"breadcrumbs":12,"title":4},"255":{"body":61,"breadcrumbs":12,"title":6},"256":{"body":34,"breadcrumbs":7,"title":1},"257":{"body":92,"breadcrumbs":7,"title":1},"258":{"body":67,"breadcrumbs":7,"title":1},"259":{"body":4,"breadcrumbs":7,"title":1},"26":{"body":39,"breadcrumbs":5,"title":1},"260":{"body":77,"breadcrumbs":7,"title":1},"261":{"body":237,"breadcrumbs":9,"title":3},"262":{"body":124,"breadcrumbs":9,"title":3},"263":{"body":41,"breadcrumbs":9,"title":3},"264":{"body":106,"breadcrumbs":10,"title":4},"265":{"body":51,"breadcrumbs":7,"title":1},"266":{"body":27,"breadcrumbs":9,"title":3},"267":{"body":0,"breadcrumbs":9,"title":3},"268":{"body":4,"breadcrumbs":7,"title":1},"269":{"body":13,"breadcrumbs":7,"title":1},"27":{"body":235,"breadcrumbs":7,"title":3},"270":{"body":3,"breadcrumbs":7,"title":1},"271":{"body":5,"breadcrumbs":9,"title":3},"272":{"body":2,"breadcrumbs":8,"title":2},"273":{"body":37,"breadcrumbs":10,"title":4},"274":{"body":78,"breadcrumbs":10,"title":5},"275":{"body":53,"breadcrumbs":6,"title":1},"276":{"body":0,"breadcrumbs":6,"title":1},"277":{"body":170,"breadcrumbs":6,"title":1},"278":{"body":164,"breadcrumbs":7,"title":2},"279":{"body":53,"breadcrumbs":6,"title":1},"28":{"body":86,"breadcrumbs":7,"title":3},"280":{"body":19,"breadcrumbs":6,"title":1},"281":{"body":242,"breadcrumbs":6,"title":1},"282":{"body":32,"breadcrumbs":7,"title":2},"283":{"body":96,"breadcrumbs":7,"title":2},"284":{"body":9,"breadcrumbs":7,"title":2},"285":{"body":82,"breadcrumbs":11,"title":6},"286":{"body":42,"breadcrumbs":6,"title":1},"287":{"body":41,"breadcrumbs":6,"title":1},"288":{"body":44,"breadcrumbs":6,"title":1},"289":{"body":36,"breadcrumbs":7,"title":2},"29":{"body":32,"breadcrumbs":7,"title":3},"290":{"body":80,"breadcrumbs":7,"title":2},"291":{"body":0,"breadcrumbs":6,"title":1},"292":{"body":15,"breadcrumbs":8,"title":3},"293":{"body":31,"breadcrumbs":7,"title":2},"294":{"body":40,"breadcrumbs":8,"title":3},"295":{"body":0,"breadcrumbs":8,"title":3},"296":{"body":23,"breadcrumbs":6,"title":1},"297":{"body":10,"breadcrumbs":6,"title":1},"298":{"body":18,"breadcrumbs":6,"title":1},"299":{"body":14,"breadcrumbs":8,"title":3},"3":{"body":0,"breadcrumbs":5,"title":1},"30":{"body":2,"breadcrumbs":7,"titl
e":3},"300":{"body":12,"breadcrumbs":7,"title":2},"301":{"body":42,"breadcrumbs":9,"title":4},"302":{"body":67,"breadcrumbs":12,"title":6},"303":{"body":20,"breadcrumbs":7,"title":1},"304":{"body":66,"breadcrumbs":7,"title":1},"305":{"body":5,"breadcrumbs":7,"title":1},"306":{"body":0,"breadcrumbs":7,"title":1},"307":{"body":77,"breadcrumbs":9,"title":3},"308":{"body":94,"breadcrumbs":9,"title":3},"309":{"body":82,"breadcrumbs":10,"title":4},"31":{"body":11,"breadcrumbs":7,"title":3},"310":{"body":101,"breadcrumbs":9,"title":3},"311":{"body":261,"breadcrumbs":8,"title":2},"312":{"body":172,"breadcrumbs":8,"title":2},"313":{"body":59,"breadcrumbs":7,"title":1},"314":{"body":11,"breadcrumbs":9,"title":3},"315":{"body":0,"breadcrumbs":9,"title":3},"316":{"body":45,"breadcrumbs":7,"title":1},"317":{"body":1,"breadcrumbs":7,"title":1},"318":{"body":19,"breadcrumbs":7,"title":1},"319":{"body":6,"breadcrumbs":9,"title":3},"32":{"body":20,"breadcrumbs":8,"title":4},"320":{"body":1,"breadcrumbs":8,"title":2},"321":{"body":13,"breadcrumbs":10,"title":4},"322":{"body":217,"breadcrumbs":7,"title":1},"323":{"body":60,"breadcrumbs":16,"title":8},"324":{"body":32,"breadcrumbs":9,"title":1},"325":{"body":98,"breadcrumbs":9,"title":1},"326":{"body":9,"breadcrumbs":9,"title":1},"327":{"body":23,"breadcrumbs":9,"title":1},"328":{"body":225,"breadcrumbs":9,"title":1},"329":{"body":125,"breadcrumbs":11,"title":3},"33":{"body":2,"breadcrumbs":7,"title":3},"330":{"body":44,"breadcrumbs":10,"title":2},"331":{"body":65,"breadcrumbs":11,"title":3},"332":{"body":2,"breadcrumbs":9,"title":1},"333":{"body":94,"breadcrumbs":11,"title":3},"334":{"body":0,"breadcrumbs":11,"title":3},"335":{"body":91,"breadcrumbs":9,"title":1},"336":{"body":1,"breadcrumbs":9,"title":1},"337":{"body":1,"breadcrumbs":9,"title":1},"338":{"body":1,"breadcrumbs":11,"title":3},"339":{"body":21,"breadcrumbs":10,"title":2},"34":{"body":1,"breadcrumbs":7,"title":3},"340":{"body":74,"breadcrumbs":12,"title":4},"341":{"body":56,"breadcrumbs":12,"title":6},"342":{"body":33,"breadcrumbs":7,"title":1},"343":{"body":122,"breadcrumbs":7,"title":1},"344":{"body":4,"breadcrumbs":7,"title":1},"345":{"body":0,"breadcrumbs":7,"title":1},"346":{"body":150,"breadcrumbs":9,"title":3},"347":{"body":100,"breadcrumbs":9,"title":3},"348":{"body":34,"breadcrumbs":7,"title":1},"349":{"body":33,"breadcrumbs":9,"title":3},"35":{"body":53,"breadcrumbs":10,"title":5},"350":{"body":0,"breadcrumbs":9,"title":3},"351":{"body":19,"breadcrumbs":7,"title":1},"352":{"body":16,"breadcrumbs":7,"title":1},"353":{"body":47,"breadcrumbs":7,"title":1},"354":{"body":8,"breadcrumbs":9,"title":3},"355":{"body":2,"breadcrumbs":8,"title":2},"356":{"body":22,"breadcrumbs":10,"title":4},"357":{"body":69,"breadcrumbs":16,"title":8},"358":{"body":38,"breadcrumbs":9,"title":1},"359":{"body":79,"breadcrumbs":9,"title":1},"36":{"body":40,"breadcrumbs":6,"title":1},"360":{"body":19,"breadcrumbs":9,"title":1},"361":{"body":386,"breadcrumbs":9,"title":1},"362":{"body":20,"breadcrumbs":9,"title":1},"363":{"body":35,"breadcrumbs":11,"title":3},"364":{"body":0,"breadcrumbs":11,"title":3},"365":{"body":9,"breadcrumbs":9,"title":1},"366":{"body":30,"breadcrumbs":9,"title":1},"367":{"body":20,"breadcrumbs":9,"title":1},"368":{"body":36,"breadcrumbs":10,"title":2},"369":{"body":21,"breadcrumbs":12,"title":4},"37":{"body":203,"breadcrumbs":6,"title":1},"370":{"body":54,"breadcrumbs":14,"title":7},"371":{"body":68,"breadcrumbs":8,"title":1},"372":{"body":87,"breadcrumbs":8,"title":1},"373":{"body":59,"breadcr
umbs":8,"title":1},"374":{"body":18,"breadcrumbs":8,"title":1},"375":{"body":0,"breadcrumbs":8,"title":1},"376":{"body":64,"breadcrumbs":8,"title":1},"377":{"body":66,"breadcrumbs":8,"title":1},"378":{"body":142,"breadcrumbs":8,"title":1},"379":{"body":19,"breadcrumbs":9,"title":2},"38":{"body":34,"breadcrumbs":6,"title":1},"380":{"body":104,"breadcrumbs":9,"title":2},"381":{"body":151,"breadcrumbs":9,"title":2},"382":{"body":2,"breadcrumbs":8,"title":1},"383":{"body":28,"breadcrumbs":10,"title":3},"384":{"body":7,"breadcrumbs":9,"title":2},"385":{"body":10,"breadcrumbs":8,"title":1},"386":{"body":52,"breadcrumbs":16,"title":8},"387":{"body":16,"breadcrumbs":9,"title":1},"388":{"body":224,"breadcrumbs":9,"title":1},"389":{"body":5,"breadcrumbs":9,"title":1},"39":{"body":8,"breadcrumbs":6,"title":1},"390":{"body":482,"breadcrumbs":9,"title":1},"391":{"body":87,"breadcrumbs":9,"title":1},"392":{"body":137,"breadcrumbs":11,"title":3},"393":{"body":0,"breadcrumbs":11,"title":3},"394":{"body":23,"breadcrumbs":9,"title":1},"395":{"body":1,"breadcrumbs":9,"title":1},"396":{"body":11,"breadcrumbs":9,"title":1},"397":{"body":6,"breadcrumbs":11,"title":3},"398":{"body":1,"breadcrumbs":10,"title":2},"399":{"body":17,"breadcrumbs":12,"title":4},"4":{"body":124,"breadcrumbs":6,"title":2},"40":{"body":176,"breadcrumbs":6,"title":1},"400":{"body":23,"breadcrumbs":10,"title":5},"401":{"body":36,"breadcrumbs":6,"title":1},"402":{"body":21,"breadcrumbs":6,"title":1},"403":{"body":4,"breadcrumbs":6,"title":1},"404":{"body":290,"breadcrumbs":6,"title":1},"405":{"body":53,"breadcrumbs":18,"title":9},"406":{"body":52,"breadcrumbs":10,"title":1},"407":{"body":87,"breadcrumbs":10,"title":1},"408":{"body":23,"breadcrumbs":10,"title":1},"409":{"body":5,"breadcrumbs":10,"title":1},"41":{"body":18,"breadcrumbs":7,"title":2},"410":{"body":270,"breadcrumbs":10,"title":1},"411":{"body":13,"breadcrumbs":10,"title":1},"412":{"body":46,"breadcrumbs":12,"title":3},"413":{"body":0,"breadcrumbs":12,"title":3},"414":{"body":29,"breadcrumbs":10,"title":1},"415":{"body":12,"breadcrumbs":10,"title":1},"416":{"body":10,"breadcrumbs":10,"title":1},"417":{"body":18,"breadcrumbs":12,"title":3},"418":{"body":2,"breadcrumbs":11,"title":2},"419":{"body":1,"breadcrumbs":13,"title":4},"42":{"body":11,"breadcrumbs":6,"title":1},"420":{"body":55,"breadcrumbs":16,"title":8},"421":{"body":24,"breadcrumbs":9,"title":1},"422":{"body":67,"breadcrumbs":9,"title":1},"423":{"body":27,"breadcrumbs":9,"title":1},"424":{"body":0,"breadcrumbs":9,"title":1},"425":{"body":27,"breadcrumbs":9,"title":1},"426":{"body":15,"breadcrumbs":9,"title":1},"427":{"body":106,"breadcrumbs":9,"title":1},"428":{"body":13,"breadcrumbs":9,"title":1},"429":{"body":29,"breadcrumbs":11,"title":3},"43":{"body":25,"breadcrumbs":8,"title":3},"430":{"body":0,"breadcrumbs":11,"title":3},"431":{"body":17,"breadcrumbs":9,"title":1},"432":{"body":17,"breadcrumbs":9,"title":1},"433":{"body":16,"breadcrumbs":9,"title":1},"434":{"body":24,"breadcrumbs":11,"title":3},"435":{"body":30,"breadcrumbs":10,"title":2},"436":{"body":37,"breadcrumbs":12,"title":4},"437":{"body":48,"breadcrumbs":10,"title":5},"438":{"body":65,"breadcrumbs":6,"title":1},"439":{"body":190,"breadcrumbs":6,"title":1},"44":{"body":14,"breadcrumbs":8,"title":3},"440":{"body":25,"breadcrumbs":6,"title":1},"441":{"body":0,"breadcrumbs":6,"title":1},"442":{"body":367,"breadcrumbs":7,"title":2},"443":{"body":218,"breadcrumbs":7,"title":2},"444":{"body":80,"breadcrumbs":8,"title":3},"445":{"body":46,"breadcrumbs":6,"title":1
},"446":{"body":23,"breadcrumbs":8,"title":3},"447":{"body":4,"breadcrumbs":7,"title":2},"448":{"body":48,"breadcrumbs":16,"title":8},"449":{"body":12,"breadcrumbs":9,"title":1},"45":{"body":21,"breadcrumbs":6,"title":1},"450":{"body":102,"breadcrumbs":9,"title":1},"451":{"body":44,"breadcrumbs":9,"title":1},"452":{"body":58,"breadcrumbs":9,"title":1},"453":{"body":19,"breadcrumbs":9,"title":1},"454":{"body":40,"breadcrumbs":11,"title":3},"455":{"body":0,"breadcrumbs":11,"title":3},"456":{"body":15,"breadcrumbs":9,"title":1},"457":{"body":49,"breadcrumbs":10,"title":2},"458":{"body":1,"breadcrumbs":11,"title":3},"459":{"body":62,"breadcrumbs":10,"title":2},"46":{"body":16,"breadcrumbs":6,"title":1},"460":{"body":18,"breadcrumbs":12,"title":4},"461":{"body":40,"breadcrumbs":12,"title":6},"462":{"body":61,"breadcrumbs":7,"title":1},"463":{"body":101,"breadcrumbs":7,"title":1},"464":{"body":15,"breadcrumbs":7,"title":1},"465":{"body":187,"breadcrumbs":7,"title":1},"466":{"body":17,"breadcrumbs":7,"title":1},"467":{"body":12,"breadcrumbs":9,"title":3},"468":{"body":0,"breadcrumbs":9,"title":3},"469":{"body":11,"breadcrumbs":7,"title":1},"47":{"body":8,"breadcrumbs":6,"title":1},"470":{"body":40,"breadcrumbs":8,"title":2},"471":{"body":1,"breadcrumbs":9,"title":3},"472":{"body":1,"breadcrumbs":8,"title":2},"473":{"body":39,"breadcrumbs":10,"title":4},"474":{"body":42,"breadcrumbs":20,"title":10},"475":{"body":28,"breadcrumbs":11,"title":1},"476":{"body":121,"breadcrumbs":11,"title":1},"477":{"body":15,"breadcrumbs":11,"title":1},"478":{"body":70,"breadcrumbs":11,"title":1},"479":{"body":0,"breadcrumbs":13,"title":3},"48":{"body":0,"breadcrumbs":8,"title":3},"480":{"body":26,"breadcrumbs":11,"title":1},"481":{"body":26,"breadcrumbs":11,"title":1},"482":{"body":7,"breadcrumbs":11,"title":1},"483":{"body":19,"breadcrumbs":13,"title":3},"484":{"body":50,"breadcrumbs":12,"title":6},"485":{"body":46,"breadcrumbs":7,"title":1},"486":{"body":24,"breadcrumbs":7,"title":1},"487":{"body":8,"breadcrumbs":7,"title":1},"488":{"body":134,"breadcrumbs":7,"title":1},"489":{"body":21,"breadcrumbs":7,"title":1},"49":{"body":16,"breadcrumbs":7,"title":2},"490":{"body":19,"breadcrumbs":9,"title":3},"491":{"body":0,"breadcrumbs":9,"title":3},"492":{"body":8,"breadcrumbs":7,"title":1},"493":{"body":11,"breadcrumbs":7,"title":1},"494":{"body":19,"breadcrumbs":7,"title":1},"495":{"body":1,"breadcrumbs":9,"title":3},"496":{"body":1,"breadcrumbs":8,"title":2},"497":{"body":3,"breadcrumbs":10,"title":4},"498":{"body":49,"breadcrumbs":14,"title":7},"499":{"body":14,"breadcrumbs":8,"title":1},"5":{"body":153,"breadcrumbs":5,"title":1},"50":{"body":21,"breadcrumbs":8,"title":3},"500":{"body":113,"breadcrumbs":8,"title":1},"501":{"body":6,"breadcrumbs":8,"title":1},"502":{"body":180,"breadcrumbs":8,"title":1},"503":{"body":91,"breadcrumbs":8,"title":1},"504":{"body":25,"breadcrumbs":10,"title":3},"505":{"body":0,"breadcrumbs":10,"title":3},"506":{"body":44,"breadcrumbs":8,"title":1},"507":{"body":13,"breadcrumbs":8,"title":1},"508":{"body":31,"breadcrumbs":8,"title":1},"509":{"body":1,"breadcrumbs":10,"title":3},"51":{"body":2,"breadcrumbs":7,"title":2},"510":{"body":1,"breadcrumbs":9,"title":2},"511":{"body":12,"breadcrumbs":11,"title":4},"52":{"body":13,"breadcrumbs":9,"title":4},"53":{"body":57,"breadcrumbs":16,"title":8},"54":{"body":29,"breadcrumbs":9,"title":1},"55":{"body":185,"breadcrumbs":9,"title":1},"56":{"body":15,"breadcrumbs":9,"title":1},"57":{"body":60,"breadcrumbs":9,"title":1},"58":{"body":110,"breadcrumbs":1
1,"title":3},"59":{"body":153,"breadcrumbs":11,"title":3},"6":{"body":90,"breadcrumbs":5,"title":1},"60":{"body":96,"breadcrumbs":9,"title":1},"61":{"body":144,"breadcrumbs":11,"title":3},"62":{"body":0,"breadcrumbs":11,"title":3},"63":{"body":110,"breadcrumbs":9,"title":1},"64":{"body":1,"breadcrumbs":9,"title":1},"65":{"body":1,"breadcrumbs":9,"title":1},"66":{"body":1,"breadcrumbs":11,"title":3},"67":{"body":21,"breadcrumbs":10,"title":2},"68":{"body":11,"breadcrumbs":12,"title":4},"69":{"body":44,"breadcrumbs":14,"title":7},"7":{"body":54,"breadcrumbs":5,"title":1},"70":{"body":43,"breadcrumbs":8,"title":1},"71":{"body":81,"breadcrumbs":8,"title":1},"72":{"body":10,"breadcrumbs":8,"title":1},"73":{"body":101,"breadcrumbs":8,"title":1},"74":{"body":70,"breadcrumbs":9,"title":2},"75":{"body":16,"breadcrumbs":8,"title":1},"76":{"body":1,"breadcrumbs":10,"title":3},"77":{"body":39,"breadcrumbs":10,"title":3},"78":{"body":7,"breadcrumbs":10,"title":3},"79":{"body":2,"breadcrumbs":9,"title":2},"8":{"body":0,"breadcrumbs":5,"title":1},"80":{"body":48,"breadcrumbs":12,"title":6},"81":{"body":35,"breadcrumbs":7,"title":1},"82":{"body":152,"breadcrumbs":7,"title":1},"83":{"body":32,"breadcrumbs":7,"title":1},"84":{"body":4,"breadcrumbs":7,"title":1},"85":{"body":0,"breadcrumbs":7,"title":1},"86":{"body":92,"breadcrumbs":8,"title":2},"87":{"body":52,"breadcrumbs":8,"title":2},"88":{"body":32,"breadcrumbs":7,"title":1},"89":{"body":111,"breadcrumbs":7,"title":1},"9":{"body":438,"breadcrumbs":5,"title":1},"90":{"body":19,"breadcrumbs":9,"title":3},"91":{"body":4,"breadcrumbs":7,"title":1},"92":{"body":8,"breadcrumbs":7,"title":1},"93":{"body":5,"breadcrumbs":7,"title":1},"94":{"body":26,"breadcrumbs":9,"title":3},"95":{"body":2,"breadcrumbs":8,"title":2},"96":{"body":34,"breadcrumbs":10,"title":4},"97":{"body":48,"breadcrumbs":10,"title":5},"98":{"body":20,"breadcrumbs":6,"title":1},"99":{"body":30,"breadcrumbs":6,"title":1}},"docs":{"0":{"body":"This book contains the Polkadot Fellowship Requests for Comments (RFCs) detailing proposed changes to the technical implementation of the Polkadot network. polkadot-fellows/RFCs","breadcrumbs":"Introduction » Introduction","id":"0","title":"Introduction"},"1":{"body":"(source) Table of Contents RFC-1: Agile Coretime Summary Motivation Present System Problems Requirements Stakeholders Explanation Overview Detail Specific functions of the Coretime-chain Notes on the Instantaneous Coretime Market Notes on Economics Notes on Types Rollout Performance, Ergonomics and Compatibility Testing, Security and Privacy Future Directions and Related Material Drawbacks, Alternatives and Unknowns Prior Art and References Start Date 30 June 2023 Description Agile periodic-sale-based model for assigning Coretime on the Polkadot Ubiquitous Computer. Authors Gavin Wood","breadcrumbs":"RFC-1: Agile Coretime » RFC-1: Agile Coretime","id":"1","title":"RFC-1: Agile Coretime"},"10":{"body":"Parameters This proposal includes a number of parameters which need not necessarily be fixed. Their usage is explained below, but their values are suggested or specified in the later section Parameter Values . Reservations and Leases The Coretime-chain includes some governance-set reservations of Coretime; these cover every System-chain. Additionally, governance is expected to initialize details of the pre-existing leased chains. Regions A Region is an assignable period of Coretime with a known regularity. 
All Regions are associated with a unique Core Index , to identify which core the assignment of which ownership of the Region controls. All Regions are also associated with a Core Mask , an 80-bit bitmap, to denote the regularity at which it may be scheduled on the core. If all bits are set in the Core Mask value, it is said to be Complete . 80 is selected since this results in the size of the datatype used to identify any Region of Polkadot Coretime to be a very convenient 128-bit. Additionally, if TIMESLICE (the number of Relay-chain blocks in a Timeslice) is 80, then a single bit in the Core Mask bitmap represents exactly one Core for one Relay-chain block in one Timeslice. All Regions have a span. Region spans are quantized into periods of TIMESLICE blocks; BULK_PERIOD divides into TIMESLICE a whole number of times. The Timeslice type is a u32 which can be multiplied by TIMESLICE to give a BlockNumber value representing the same quantity in terms of Relay-chain blocks. Regions can be tasked to a TaskId (aka ParaId) or pooled into the Instantaneous Coretime Pool. This process can be Provisional or Final . If done only provisionally or not at all then they are fresh and have an Owner which is able to manipulate them further including reassignment. Once Final , then all ownership information is discarded and they cannot be manipulated further. Renewal is not possible when only provisionally tasked/pooled. Bulk Sales A sale of Bulk Coretime occurs on the Coretime-chain every BULK_PERIOD blocks. In every sale, a BULK_LIMIT of individual Regions are offered for sale. Each Region offered for sale has a different Core Index, ensuring that they each represent an independently allocatable resource on the Polkadot UC. The Regions offered for sale have the same span: they last exactly BULK_PERIOD blocks, and begin immediately following the span of the previous Sale's Regions. The Regions offered for sale also have the complete, non-interlaced, Core Mask. The Sale Period ends immediately as soon as span of the Coretime Regions that are being sold begins. At this point, the next Sale Price is set according to the previous Sale Price together with the number of Regions sold compared to the desired and maximum amount of Regions to be sold. See Price Setting for additional detail on this point. Following the end of the previous Sale Period, there is an Interlude Period lasting INTERLUDE_PERIOD of blocks. After this period is elapsed, regular purchasing begins with the Purchasing Period . This is designed to give at least two weeks worth of time for the purchased regions to be partitioned, interlaced, traded and allocated. The Interlude The Interlude period is a period prior to Regular Purchasing where renewals are allowed to happen. This has the effect of ensuring existing long-term tasks/parachains have a chance to secure their Bulk Coretime for a well-known price prior to general sales. Regular Purchasing Any account may purchase Regions of Bulk Coretime if they have the appropriate funds in place during the Purchasing Period, which is from INTERLUDE_PERIOD blocks after the end of the previous sale until the beginning of the Region of the Bulk Coretime which is for sale as long as there are Regions of Bulk Coretime left for sale (i.e. no more than BULK_LIMIT have already been sold in the Bulk Coretime Sale). The Purchasing Period is thus roughly BULK_PERIOD - INTERLUDE_PERIOD blocks in length. 
The Sale Price varies during an initial portion of the Purchasing Period called the Leadin Period and then stays stable for the remainder. This initial portion is LEADIN_PERIOD blocks in duration. During the Leadin Period the price decreases towards the Sale Price, which it lands at by the end of the Leadin Period. The actual curve by which the price starts and descends to the Sale Price is outside the scope of this RFC, though a basic suggestion is provided in the Price Setting Notes, below. Renewals At any time when there are remaining Regions of Bulk Coretime to be sold, including during the Interlude Period , then certain Bulk Coretime assignmnents may be Renewed . This is similar to a purchase in that funds must be paid and it consumes one of the Regions of Bulk Coretime which would otherwise be placed for purchase. However there are two key differences. Firstly, the price paid is the minimum of RENEWAL_PRICE_CAP more than what the purchase/renewal price was in the previous renewal and the current (or initial, if yet to begin) regular Sale Price. Secondly, the purchased Region comes preassigned with exactly the same workload as before. It cannot be traded, repartitioned, interlaced or exchanged. As such unlike regular purchasing the Region never has an owner. Renewal is only possible for either cores which have been assigned as a result of a previous renewal, which are migrating from legacy slot leases, or which fill their Bulk Coretime with an unsegmented, fully and finally assigned workload which does not include placement in the Instantaneous Coretime Pool. The renewed workload will be the same as this initial workload. Manipulation Regions may be manipulated in various ways by its owner: Transferred in ownership. Partitioned into quantized, non-overlapping segments of Bulk Coretime with the same ownership. Interlaced into multiple Regions over the same period whose eventual assignments take turns to be scheduled. Assigned to a single, specific task (identified by TaskId aka ParaId). This may be either provisional or final . Pooled into the Instantaneous Coretime Pool, in return for a pro-rata amount of the revenue from the Instantaneous Coretime Sales over its period. Enactment","breadcrumbs":"RFC-1: Agile Coretime » Detail","id":"10","title":"Detail"},"100":{"body":"Fellowship: Will continue to take upon them the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have. Kusama Network: Tokenholders can easily see the changes of all system chains in one place. Encointer Association: Further decentralization of the Encointer Network necessities like devops. Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.","breadcrumbs":"RFC-0022: Adopt Encointer Runtime » Stakeholders","id":"100","title":"Stakeholders"},"101":{"body":"Our PR has all details about our runtime and how we would move it into the fellowship repo. Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains but that will not be a duty of fellowship. 
Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum. Further notes: Encointer will publish all its crates crates.io Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.","breadcrumbs":"RFC-0022: Adopt Encointer Runtime » Explanation","id":"101","title":"Explanation"},"102":{"body":"Other than all other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there's less funding.","breadcrumbs":"RFC-0022: Adopt Encointer Runtime » Drawbacks","id":"102","title":"Drawbacks"},"103":{"body":"No changes to the existing system are proposed. Only changes to how maintenance is organized.","breadcrumbs":"RFC-0022: Adopt Encointer Runtime » Testing, Security, and Privacy","id":"103","title":"Testing, Security, and Privacy"},"104":{"body":"No changes","breadcrumbs":"RFC-0022: Adopt Encointer Runtime » Performance, Ergonomics, and Compatibility","id":"104","title":"Performance, Ergonomics, and Compatibility"},"105":{"body":"Existing Encointer runtime repo","breadcrumbs":"RFC-0022: Adopt Encointer Runtime » Prior Art and References","id":"105","title":"Prior Art and References"},"106":{"body":"None identified","breadcrumbs":"RFC-0022: Adopt Encointer Runtime » Unresolved Questions","id":"106","title":"Unresolved Questions"},"107":{"body":"More info on Encointer: encointer.org","breadcrumbs":"RFC-0022: Adopt Encointer Runtime » Future Directions and Related Material","id":"107","title":"Future Directions and Related Material"},"108":{"body":"(source) Table of Contents RFC-0032: Minimal Relay Summary Motivation Stakeholders Explanation Migrations Interfaces Functional Architecture Resource Allocation Deployment Kusama Drawbacks Testing, Security, and Privacy Performance, Ergonomics, and Compatibility Performance Ergonomics Compatibility Prior Art and References Unresolved Questions Future Directions and Related Material Start Date 20 September 2023 Description Proposal to minimise Relay Chain functionality. Authors Joe Petrowski, Gavin Wood","breadcrumbs":"RFC-0032: Minimal Relay » RFC-0032: Minimal Relay","id":"108","title":"RFC-0032: Minimal Relay"},"109":{"body":"The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.","breadcrumbs":"RFC-0032: Minimal Relay » Summary","id":"109","title":"Summary"},"11":{"body":"Several functions of the Coretime-chain SHALL be exposed through dispatchables and/or a nonfungible trait implementation integrated into XCM: 1. transfer Regions may have their ownership transferred. A transfer(region: RegionId, new_owner: AccountId) dispatchable shall have the effect of altering the current owner of the Region identified by region from the signed origin to new_owner. An implementation of the nonfungible trait SHOULD include equivalent functionality. 
RegionId SHOULD be used for the AssetInstance value. 2. partition Regions may be split apart into two non-overlapping interior Regions of the same Core Mask which together concatenate to the original Region. A partition(region: RegionId, pivot: Timeslice) dispatchable SHALL have the effect of removing the Region identified by region and adding two new Regions of the same owner and Core Mask. One new Region will begin at the same point of the old Region but end at pivot timeslices into the Region, whereas the other will begin at this point and end at the end point of the original Region. Also: owner field of region must the equal to the Signed origin. pivot must equal neither the begin nor end fields of the region. 3. interlace Regions may be decomposed into two Regions of the same span whose eventual assignments take turns on the core by virtue of having complementary Core Masks. An interlace(region: RegionId, mask: CoreMask) dispatchable shall have the effect of removing the Region identified by region and creating two new Regions. The new Regions will each have the same span and owner of the original Region, but one Region will have a Core Mask equal to mask and the other will have Core Mask equal to the XOR of mask and the Core Mask of the original Region. Also: owner field of region must the equal to the Signed origin. mask must have some bits set AND must not equal the Core Mask of the old Region AND must only have bits set which are also set in the old Region's' Core Mask. 4. assign Regions may be assigned to a core. A assign(region: RegionId, target: TaskId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the target task. If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's begining has already passed, then it is effectively altered to become the next schedulable timeslice. finality may have the value of either Final or Provisional. If Final, then the operation is free, the region record is removed entirely from storage and renewal may be possible: if the Region's span is the entire BULK_PERIOD, then the Coretime-chain records in storage that the allocation happened during this period in order to facilitate the possibility for a renewal. (Renewal only becomes possible when the full Core Mask of a core is finally assigned for the full BULK_PERIOD.) Also: owner field of region must the equal to the Signed origin. 5. pool Regions may be consumed in exchange for a pro rata portion of the Instantaneous Coretime Sales Revenue from its period and regularity. A pool(region: RegionId, beneficiary: AccountId, finality: Finality) dispatchable shall have the effect of placing an item in the workplan corresponding to the region's properties and assigned to the Instantaneous Coretime Pool. The details of the region will be recorded in order to allow for a pro rata share of the Instantaneous Coretime Sales Revenue at the time of the Region relative to any other providers in the Pool. If the region's end has already passed (taking into account any advance notice requirements) then this operation is a no-op. If the region's begining has already passed, then it is effectively altered to become the next schedulable timeslice. finality may have the value of either Final or Provisional. If Final, then the operation is free and the region record is removed entirely from storage. 
Also: owner field of region must the equal to the Signed origin. 6. Purchases A dispatchable purchase(price_limit: Balance) shall be provided. Any account may call purchase to purchase Bulk Coretime at the maximum price of price_limit. This may be called successfully only: during the regular Purchasing Period; when the caller is a Signed origin and their account balance is reducible by the current sale price; when the current sale price is no greater than price_limit; and when the number of cores already sold is less than BULK_LIMIT. If successful, the caller's account balance is reduced by the current sale price and a new Region item for the following Bulk Coretime span is issued with the owner equal to the caller's account. 7. Renewals A dispatchable renew(core: CoreIndex) shall be provided. Any account may call renew to purchase Bulk Coretime and renew an active allocation for the given core. This may be called during the Interlude Period as well as the regular Purchasing Period and has the same effect as purchase followed by assign, except that: The price of the sale is the Renewal Price (see next). The Region is allocated exactly the given core is currently allocated for the present Region. Renewal is only valid where a Region's span is assigned to Tasks (not placed in the Instantaneous Coretime Pool) for the entire unsplit BULK_PERIOD over all of the Core Mask and with Finality. There are thus three possibilities of a renewal being allowed: Purchased unsplit Coretime with final assignment to tasks over the full Core Mask. Renewed Coretime. A legacy lease which is ending. Renewal Price The Renewal Price is the minimum of the current regular Sale Price (or the initial Sale Price if in the Interlude Period) and: If the workload being renewed came to be through the Purchase and Assignment of Bulk Coretime, then the price paid during that Purchase operation. If the workload being renewed was previously renewed, then the price paid during this previous Renewal operation plus RENEWAL_PRICE_CAP. If the workload being renewed is a migation from a legacy slot auction lease, then the nominal price for a Regular Purchase (outside of the Lead-in Period) of the Sale during which the legacy lease expires. 8. Instantaneous Coretime Credits A dispatchable purchase_credit(amount: Balance, beneficiary: RelayChainAccountId) shall be provided. Any account with at least amount spendable funds may call this. This increases the Instantaneous Coretime Credit balance on the Relay-chain of the beneficiary by the given amount. This Credit is consumable on the Relay-chain as part of the Task scheduling system and its specifics are out of the scope of this proposal. When consumed, revenue is recorded and provided to the Coretime-chain for proper distribution. The API for doing this is specified in RFC-5.","breadcrumbs":"RFC-1: Agile Coretime » Specific functions of the Coretime-chain","id":"11","title":"Specific functions of the Coretime-chain"},"110":{"body":"Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing group) of the Relay Chain's validator set. However, state transitions on the Relay Chain need to be executed by all validators. 
If any of those state transitions can occur on parachains, then the resources of the complement of a single backing group could be used to offer more cores. As in, they could be offering more coretime (a.k.a. blockspace) to the network. By minimising state transition logic on the Relay Chain by migrating it into \"system chains\" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.","breadcrumbs":"RFC-0032: Minimal Relay » Motivation","id":"110","title":"Motivation"},"111":{"body":"Parachains that interact with affected logic on the Relay Chain; Core protocol and XCM format developers; Tooling, block explorer, and UI developers.","breadcrumbs":"RFC-0032: Minimal Relay » Stakeholders","id":"111","title":"Stakeholders"},"112":{"body":"The following pallets and subsystems are good candidates to migrate from the Relay Chain: Identity Balances Staking Staking Election Provider Bags List NIS Nomination Pools Fast Unstake Governance Treasury and Bounties Conviction Voting Referenda Note: The Auctions and Crowdloan pallets will be replaced by Coretime, its system chain and interface described in RFC-1 and RFC-5, respectively.","breadcrumbs":"RFC-0032: Minimal Relay » Explanation","id":"112","title":"Explanation"},"113":{"body":"Some subsystems are simpler to move than others. For example, migrating Identity can be done by simply preventing state changes in the Relay Chain, using the Identity-related state as the genesis for a new chain, and launching that new chain with the genesis and logic (pallet) needed. Other subsystems cannot experience any downtime like this because they are essential to the network's functioning, like Staking and Governance. However, these can likely coexist with a similarly-permissioned system chain for some time, much like how \"Gov1\" and \"OpenGov\" coexisted at the latter's introduction. Specific migration plans will be included in release notes of runtimes from the Polkadot Fellowship when beginning the work of migrating a particular subsystem.","breadcrumbs":"RFC-0032: Minimal Relay » Migrations","id":"113","title":"Migrations"},"114":{"body":"The Relay Chain, in many cases, will still need to interact with these subsystems, especially Staking and Governance. These subsystems will require making some APIs available either via dispatchable calls accessible to XCM Transact or possibly XCM Instructions in future versions. For example, Staking provides a pallet-API to register points (e.g. for block production) and offences (e.g. equivocation). With Staking in a system chain, that chain would need to allow the Relay Chain to update validator points periodically so that it can correctly calculate rewards. A pub-sub protocol may also lend itself to these types of interactions.","breadcrumbs":"RFC-0032: Minimal Relay » Interfaces","id":"114","title":"Interfaces"},"115":{"body":"This RFC proposes that system chains form individual components within the system's architecture and that these components are chosen as functional groups. This approach allows synchronous composibility where it is most valuable, but isolates logic in such a way that provides flexibility for optimal resource allocation (see Resource Allocation ). 
For the subsystems discussed in this RFC, namely Identity, Governance, and Staking, this would mean: People Chain, for identity and personhood logic, providing functionality related to the attributes of single actors; Governance Chain, for governance and system collectives, providing functionality for pluralities to express their voices within the system; Staking Chain, for Polkadot's staking system, including elections, nominations, reward distribution, slashing, and non-interactive staking; and Asset Hub, for fungible and non-fungible assets, including DOT. The Collectives chain and Asset Hub already exist, so implementation of this RFC would mean two new chains (People and Staking), with Governance moving to the chain currently known as Collectives, and Asset Hub being increasingly used for DOT over the Relay Chain. Note that one functional group will likely include many pallets, as we do not know how pallet configurations and interfaces will evolve over time.","breadcrumbs":"RFC-0032: Minimal Relay » Functional Architecture","id":"115","title":"Functional Architecture"},"116":{"body":"The system should minimise wasted blockspace. These three (and other) subsystems may not each consistently require a dedicated core. However, core scheduling is far more agile than functional grouping. While migrating functionality from one chain to another can be a multi-month endeavour, cores can be rescheduled almost on-the-fly. Migrations are also breaking changes to some use cases, for example other parachains that need to route XCM programs to particular chains. It is thus preferable to do them a single time when migrating off the Relay Chain, reducing the risk of needing parachain splits in the future. Therefore, chain boundaries should be based on functional grouping where synchronous composability is most valuable; and efficient resource allocation should be managed by the core scheduling protocol. Many of these system chains (including Asset Hub) could often share a single core in a semi-round robin fashion (the coretime may not be uniform). When needed, for example during NPoS elections or slashing events, the scheduler could allocate a dedicated core to the chain in need of more throughput.","breadcrumbs":"RFC-0032: Minimal Relay » Resource Allocation","id":"116","title":"Resource Allocation"},"117":{"body":"Actual migrations should happen based on some prioritization. This RFC proposes to migrate Identity, Staking, and Governance as the systems to work on first. A brief discussion on the factors involved in each one: Identity Identity will be one of the simpler pallets to migrate into a system chain, as its logic is largely self-contained and it does not \"share\" balances with other subsystems. That is, any DOT is held in reserve as a storage deposit and cannot simultaneously be used for other purposes, the way locked DOT can be locked for multiple purposes. Therefore, migration can take place as follows: The pallet can be put in a locked state, blocking most calls to the pallet and preventing updates to identity info. The frozen state will form the genesis of a new system parachain. Functions will be added to the pallet that allow migrating the deposit to the parachain. The parachain deposit is on the order of 1/100th of the Relay Chain's. Therefore, this will result in freeing up Relay State as well as most of each user's reserved balance. The pallet and any leftover state can be removed from the Relay Chain.
User interfaces that render Identity information will need to source their data from the new system parachain. Note: In the future, it may make sense to decommission Kusama's Identity chain and do all account identities via Polkadot's. However, the Kusama chain will serve as a dress rehearsal for Polkadot. Staking Migrating the staking subsystem will likely be the most complex technical undertaking, as the Staking system cannot stop (the system MUST always have a validator set) nor run in parallel (the system MUST have only one validator set) and the subsystem itself is made up of subsystems in the runtime and the node. For example, if offences are reported to the Staking parachain, validator nodes will need to submit their reports there. Handling balances also introduces complications. The same balance can be used for staking and governance. Ideally, all balances stay on Asset Hub, which only reports \"credits\" to system chains like Staking and Governance. However, staking mutates balances by issuing new DOT on era changes and for rewards. Allowing DOT directly on the Staking parachain would simplify staking changes. Given the complexity, it would be pragmatic to include the Balances pallet in the Staking parachain in its first version. Any other systems that use overlapping locks, most notably governance, will need to recognise DOT held on both Asset Hub and the Staking parachain. There is more discussion about staking in a parachain in Moving Staking off the Relay Chain. Governance Migrating governance into a parachain will be less complicated than staking. Most of the primitives needed for the migration already exist. The Treasury supports spending assets on remote chains, and collectives like the Polkadot Technical Fellowship already function in a parachain. That is, XCM already provides the ability to express system origins across chains. Therefore, actually moving the governance logic into a parachain will be simple. It can run in parallel with the Relay Chain's governance, which can be removed when the parachain has demonstrated sufficient functionality. It is possible that the Relay Chain will maintain a Root-level emergency track for situations like parachains halting. The only complication arises from the fact that both Asset Hub and the Staking parachain will have DOT balances; therefore, the Governance chain will need to be able to credit users' voting power based on balances from both locations. This is not expected to be difficult to handle.","breadcrumbs":"RFC-0032: Minimal Relay » Deployment","id":"117","title":"Deployment"},"118":{"body":"Although Polkadot and Kusama both have system chains running, they have to date only been used for introducing new features or bodies, for example fungible assets or the Technical Fellowship. There has not yet been a migration of logic/state from the Relay Chain into a parachain. Given its more realistic network conditions than testnets, Kusama is the best stage for rehearsal. In the case of identity, Polkadot's system may be sufficient for the ecosystem. Therefore, Kusama should be used to test the migration of logic and state from Relay Chain to parachain, but these features may be (at the will of Kusama's governance) dropped from Kusama entirely after a successful migration on Polkadot. For Governance, Polkadot already has the Collectives parachain, which would become the Governance parachain. The entire group of DOT holders is itself a collective (the legislative body), and governance provides the means to express voice.
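The balance-handling discussion above raises the idea of keeping DOT on Asset Hub and only reporting "credits" to Staking and Governance. The following is a speculative sketch of what such an interface could look like; every name here is an assumption for illustration, not an existing or proposed API.

// Speculative sketch of the "credits" idea: balances stay on Asset Hub, which
// reports usable amounts to consuming system chains (Staking, Governance).
type AccountId = [u8; 32];
type Balance = u128;

trait BalanceCredit {
    // Grant or update the credit an account may use on this chain,
    // analogous to a lock on the underlying DOT held on Asset Hub.
    fn set_credit(who: AccountId, amount: Balance);

    // Remove the credit, e.g. once the underlying DOT is unlocked or moved.
    fn clear_credit(who: AccountId);
}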
Launching a Kusama Governance chain to rehearse the migration would be sensible. The Staking subsystem is perhaps where Kusama would provide the most value in its canary capacity. Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- will give confidence in the chain's robustness on Polkadot.","breadcrumbs":"RFC-0032: Minimal Relay » Kusama","id":"118","title":"Kusama"},"119":{"body":"These subsystems will have fewer resources in cores than they do on the Relay Chain. Staking in particular may require some optimizations to deal with constraints.","breadcrumbs":"RFC-0032: Minimal Relay » Drawbacks","id":"119","title":"Drawbacks"},"12":{"body":"For an efficient market to form around the provision of Bulk-purchased Cores into the pool of cores available for Instantaneous Coretime purchase, it is crucial to ensure that price changes for the purchase of Instantaneous Coretime are reflected well in the revenues of private Coretime providers during the same period. In order to ensure this, it is crucial that Instantaneous Coretime, once purchased, cannot be held indefinitely prior to eventual use since, if this were the case, a nefarious collator could purchase Coretime when cheap and utilize it some time later when expensive and deprive private Coretime providers of their revenue. It must therefore be assumed that Instantaneous Coretime, once purchased, has a definite and short \"shelf-life\", after which it becomes unusable. This incentivizes collators to avoid purchasing Coretime unless they expect to utilize it imminently and thus helps create an efficient market-feedback mechanism whereby a higher price will actually result in material revenues for private Coretime providers who contribute to the pool of Cores available to service Instantaneous Coretime purchases.","breadcrumbs":"RFC-1: Agile Coretime » Notes on the Instantaneous Coretime Market","id":"12","title":"Notes on the Instantaneous Coretime Market"},"120":{"body":"Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.","breadcrumbs":"RFC-0032: Minimal Relay » Testing, Security, and Privacy","id":"120","title":"Testing, Security, and Privacy"},"121":{"body":"Describe the impact of the proposal on the exposed functionality of Polkadot.","breadcrumbs":"RFC-0032: Minimal Relay » Performance, Ergonomics, and Compatibility","id":"121","title":"Performance, Ergonomics, and Compatibility"},"122":{"body":"This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its primary resources are allocated to system performance.","breadcrumbs":"RFC-0032: Minimal Relay » Performance","id":"122","title":"Performance"},"123":{"body":"This proposal alters very little for coretime users (e.g. parachain developers). Application developers will need to interact with multiple chains, making ergonomic light client tools particularly important for application development. Existing parachains that interact with these subsystems will need to configure their runtimes to recognize the new locations in the network.","breadcrumbs":"RFC-0032: Minimal Relay » Ergonomics","id":"123","title":"Ergonomics"},"124":{"body":"Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol.
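The Instantaneous Coretime Market note above argues that purchased Instantaneous Coretime must have a short shelf-life. A minimal sketch of that constraint follows; the window length and all names are assumptions chosen for illustration, not values from the proposal.

// Illustrative sketch of the "shelf-life" constraint on purchased
// Instantaneous Coretime. The window length and names are assumptions.
type BlockNumber = u32;

// Assumed validity window, in Relay-chain blocks.
const SHELF_LIFE: BlockNumber = 10;

struct InstantaneousPurchase {
    purchased_at: BlockNumber,
}

impl InstantaneousPurchase {
    // A purchase is only consumable shortly after it was made, which removes
    // the incentive to hoard cheap coretime for later use at higher prices.
    fn is_usable(&self, now: BlockNumber) -> bool {
        now.saturating_sub(self.purchased_at) <= SHELF_LIFE
    }
}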
Application developers will need to interact with multiple chains in the network.","breadcrumbs":"RFC-0032: Minimal Relay » Compatibility","id":"124","title":"Compatibility"},"125":{"body":"Transactionless Relay-chain Moving Staking off the Relay Chain","breadcrumbs":"RFC-0032: Minimal Relay » Prior Art and References","id":"125","title":"Prior Art and References"},"126":{"body":"There remain some implementation questions, like how to use balances for both Staking and Governance. See, for example, Moving Staking off the Relay Chain.","breadcrumbs":"RFC-0032: Minimal Relay » Unresolved Questions","id":"126","title":"Unresolved Questions"},"127":{"body":"Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. With Staking and Governance off the Relay Chain, this is not an unreasonable next step. With Identity on Polkadot, Kusama may opt to drop its People Chain.","breadcrumbs":"RFC-0032: Minimal Relay » Future Directions and Related Material","id":"127","title":"Future Directions and Related Material"},"128":{"body":"(source) Table of Contents RFC-0050: Fellowship Salaries Summary Motivation Stakeholders Explanation Salary Asset Projections Updates Drawbacks Testing, Security, and Privacy Performance, Ergonomics, and Compatibility Performance Ergonomics Compatibility Prior Art and References Unresolved Questions Start Date 15 November 2023 Description Proposal to set rank-based Fellowship salary levels. Authors Joe Petrowski, Gavin Wood","breadcrumbs":"RFC-0050: Fellowship Salaries » RFC-0050: Fellowship Salaries","id":"128","title":"RFC-0050: Fellowship Salaries"},"129":{"body":"The Fellowship Manifesto states that members should receive a monthly allowance on par with gross income in OECD countries. This RFC proposes concrete amounts.","breadcrumbs":"RFC-0050: Fellowship Salaries » Summary","id":"129","title":"Summary"},"13":{"body":"The specific pricing mechanisms are out of scope for the present proposal. Proposals on economics should be properly described and discussed in another RFC. However, for the sake of completeness, I provide some basic illustration of how price setting could potentially work. Bulk Price Progression The present proposal assumes the existence of a price-setting mechanism which takes into account several parameters: OLD_PRICE: The price of the previous sale. BULK_TARGET: the target number of cores to be purchased as Bulk Coretime Regions or renewed during the previous sale. BULK_LIMIT: the maximum number of cores which could have been purchased/renewed during the previous sale. CORES_SOLD: the actual number of cores purchased/renewed in the previous sale. SELLOUT_PRICE: the price at which the most recent Bulk Coretime was purchased (not renewed) prior to selling more cores than BULK_TARGET (or immediately after, if none were purchased before). This may not have a value if no Bulk Coretime was purchased. In general we would expect the price to increase the closer CORES_SOLD gets to BULK_LIMIT and to decrease the closer it gets to zero. If it is exactly equal to BULK_TARGET, then we would expect the price to remain the same. In the edge case that no cores were purchased yet more cores were sold (through renewals) than the target, we would also avoid altering the price.
A simple example of this would be the formula: IF SELLOUT_PRICE == NULL AND CORES_SOLD > BULK_TARGET THEN RETURN OLD_PRICE\nEND IF\nEFFECTIVE_PRICE := IF CORES_SOLD > BULK_TARGET THEN SELLOUT_PRICE\nELSE OLD_PRICE\nEND IF\nNEW_PRICE := IF CORES_SOLD < BULK_TARGET THEN EFFECTIVE_PRICE * MAX(CORES_SOLD, 1) / BULK_TARGET\nELSE EFFECTIVE_PRICE + EFFECTIVE_PRICE * (CORES_SOLD - BULK_TARGET) / (BULK_LIMIT - BULK_TARGET)\nEND IF This exists only as a trivial example to demonstrate that a basic solution exists, and should not be taken as a concrete proposal. Intra-Leadin Price-decrease During the Leadin Period of a sale, the effective price starts higher than the Sale Price and falls to end at the Sale Price at the end of the Leadin Period. The price can thus be defined as a simple factor above one by which the Sale Price is multiplied. A function which returns this factor would accept a value between zero and one specifying the portion of the Leadin Period which has passed. Thus, assuming SALE_PRICE, we can define PRICE as: PRICE := SALE_PRICE * FACTOR((NOW - LEADIN_BEGIN) / LEADIN_PERIOD) We can define a very simple progression where the price decreases monotonically from double the Sale Price at the beginning of the Leadin Period. FACTOR(T) := 2 - T Parameter Values Parameters are either suggested or specified. If suggested, the value is non-binding and the proposal should not be judged on it, since other RFCs and/or the governance mechanism of Polkadot is expected to specify/maintain it. If specified, then the proposal should be judged on the merit of the value as-is. Name Value BULK_PERIOD 28 * DAYS specified INTERLUDE_PERIOD 7 * DAYS specified LEADIN_PERIOD 7 * DAYS specified TIMESLICE 8 * MINUTES specified BULK_TARGET 30 suggested BULK_LIMIT 45 suggested RENEWAL_PRICE_CAP Perbill::from_percent(2) suggested Instantaneous Price Progression This proposal assumes the existence of a Relay-chain-based price-setting mechanism for the Instantaneous Coretime Market which alters from block to block, taking into account several parameters: the last price, the size of the Instantaneous Coretime Pool (in terms of cores per Relay-chain block) and the amount of Instantaneous Coretime waiting for processing (in terms of Core-blocks queued). The ideal situation is to have the size of the Instantaneous Coretime Pool be equal to some factor of the Instantaneous Coretime waiting. This allows all Instantaneous Coretime sales to be processed with some limited latency while giving limited flexibility over ordering to the Relay-chain apparatus which is needed for efficient operation. If we set a factor of three, and thus aim to retain a queue of Instantaneous Coretime Sales which can be processed within three Relay-chain blocks, then we would increase the price if the queue goes above three times the amount of cores available, and decrease if it goes under. Let us assume the values OLD_PRICE, FACTOR, QUEUE_SIZE and POOL_SIZE. A simple definition of the NEW_PRICE would thus be: NEW_PRICE := IF QUEUE_SIZE < POOL_SIZE * FACTOR THEN OLD_PRICE * 0.95\nELSE OLD_PRICE / 0.95\nEND IF This exists only as a trivial example to demonstrate that a basic solution exists, and should not be taken as a concrete proposal.","breadcrumbs":"RFC-1: Agile Coretime » Notes on Economics","id":"13","title":"Notes on Economics"},"130":{"body":"One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and retain technical talent for the continued progress of the network.
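As a minimal sketch, the trivial pricing examples above translate directly into code. The use of floating-point arithmetic and the function names are choices made here for illustration only, not part of the proposal.

// Direct transcription of the trivial pricing examples above.
fn new_bulk_price(
    old_price: f64,
    sellout_price: Option<f64>,
    cores_sold: u32,
    bulk_target: u32,
    bulk_limit: u32,
) -> f64 {
    // Edge case: only renewals pushed sales over the target, so there is no
    // sellout price to anchor on; leave the price unchanged.
    if sellout_price.is_none() && cores_sold > bulk_target {
        return old_price;
    }
    let effective_price = if cores_sold > bulk_target {
        sellout_price.expect("checked above")
    } else {
        old_price
    };
    if cores_sold < bulk_target {
        effective_price * f64::from(cores_sold.max(1)) / f64::from(bulk_target)
    } else {
        effective_price
            + effective_price * f64::from(cores_sold - bulk_target)
                / f64::from(bulk_limit - bulk_target)
    }
}

// Effective price during the Leadin Period, using FACTOR(T) = 2 - T, where
// `t` is the portion of the Leadin Period already elapsed (0..=1).
fn leadin_price(sale_price: f64, t: f64) -> f64 {
    sale_price * (2.0 - t)
}

// Instantaneous market price update from the example above.
fn new_instantaneous_price(old_price: f64, queue_size: u64, pool_size: u64, factor: u64) -> f64 {
    if queue_size < pool_size * factor {
        old_price * 0.95
    } else {
        old_price / 0.95
    }
}

With the suggested BULK_TARGET of 30 and BULK_LIMIT of 45, selling 15 cores at an old price of 100 would halve the price to 50, while selling all 45 would double the effective price.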
In order for members to uphold their commitment to the network, they should receive support to ensure that their needs are met such that they have the time to dedicate to their work on Polkadot. Given the high expectations of Fellows, it is reasonable to consider contributions and requirements on par with a full-time job. Providing a livable wage to those making such contributions makes it pragmatic to work full-time on Polkadot. Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.","breadcrumbs":"RFC-0050: Fellowship Salaries » Motivation","id":"130","title":"Motivation"},"131":{"body":"Fellowship members Polkadot Treasury","breadcrumbs":"RFC-0050: Fellowship Salaries » Stakeholders","id":"131","title":"Stakeholders"},"132":{"body":"This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to the amount or asset used would only be on a single value, and all others would adjust relatively. A III Dan is someone whose contributions match the expectations of a full-time individual contributor. The salary at this level should be reasonably close to averages in OECD countries. Dan Factor I 0.125 II 0.25 III 1 IV 1.5 V 2.0 VI 2.5 VII 2.5 VIII 2.5 IX 2.5 Note that there is a sizable increase between II Dan (Proficient) and III Dan (Fellow). By the third Dan, it is generally expected that one is working on Polkadot as their primary focus in a full-time capacity.","breadcrumbs":"RFC-0050: Fellowship Salaries » Explanation","id":"132","title":"Explanation"},"133":{"body":"Although the Manifesto (Section 8) specifies a monthly allowance in DOT, this RFC proposes the use of USDT instead. The allowance is meant to provide members stability in meeting their day-to-day needs and recognize contributions. Using USDT provides more stability and less speculation. This RFC proposes that a III Dan earn 80,000 USDT per year. The salary at this level is commensurate with average salaries in OECD countries (note: 77,000 USD in the U.S., with an average engineer at 100,000 USD). The other ranks would thus earn: Dan Annual Salary I 10,000 II 20,000 III 80,000 IV 120,000 V 160,000 VI 200,000 VII 200,000 VIII 200,000 IX 200,000 The salary levels for Architects (IV, V, and VI Dan) are typical of senior engineers. Allowances will be managed by the Salary pallet.","breadcrumbs":"RFC-0050: Fellowship Salaries » Salary Asset","id":"133","title":"Salary Asset"},"134":{"body":"Based on the current membership, the maximum yearly and monthly costs are shown below: Dan Salary Members Yearly Monthly I 10,000 27 270,000 22,500 II 20,000 11 220,000 18,333 III 80,000 8 640,000 53,333 IV 120,000 3 360,000 30,000 V 160,000 5 800,000 66,667 VI 200,000 3 600,000 50,000 > VI 200,000 0 0 0 Total 2,890,000 240,833 Note that these are the maximum amounts; members may choose to take a passive (lower) level. On the other hand, more people will likely join the Fellowship in the coming years.","breadcrumbs":"RFC-0050: Fellowship Salaries » Projections","id":"134","title":"Projections"},"135":{"body":"Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via RFC.","breadcrumbs":"RFC-0050: Fellowship Salaries » Updates","id":"135","title":"Updates"},"136":{"body":"By not using DOT for payment, the protocol relies on the stability of other assets and the ability to acquire them. 
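The projection figures above follow from the rank factors and the 80,000 USDT III Dan salary. A quick sketch that reproduces the arithmetic, using the member counts listed in this RFC:

// Quick sketch reproducing the salary projection arithmetic above. The rank
// factors, member counts, and 80,000 USDT base are the figures from this RFC.
fn main() {
    let base = 80_000.0; // III Dan annual salary in USDT
    // (factor, current members) for Dans I through VI, as listed above.
    let ranks = [(0.125, 27), (0.25, 11), (1.0, 8), (1.5, 3), (2.0, 5), (2.5, 3)];

    let yearly: f64 = ranks
        .iter()
        .map(|(factor, members)| factor * base * f64::from(*members))
        .sum();

    println!("maximum yearly cost: {yearly}"); // 2,890,000
    println!("maximum monthly cost: {:.0}", yearly / 12.0); // ~240,833
}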
However, the asset of choice can be changed in the future.","breadcrumbs":"RFC-0050: Fellowship Salaries » Drawbacks","id":"136","title":"Drawbacks"},"137":{"body":"N/A.","breadcrumbs":"RFC-0050: Fellowship Salaries » Testing, Security, and Privacy","id":"137","title":"Testing, Security, and Privacy"},"138":{"body":"","breadcrumbs":"RFC-0050: Fellowship Salaries » Performance, Ergonomics, and Compatibility","id":"138","title":"Performance, Ergonomics, and Compatibility"},"139":{"body":"N/A","breadcrumbs":"RFC-0050: Fellowship Salaries » Performance","id":"139","title":"Performance"},"14":{"body":"This exists only as a short illustration of a potential technical implementation and should not be treated as anything more. Regions This data schema achieves a number of goals: Coretime can be individually traded at the level of a single usage of a single core. Coretime Regions, of arbitrary span and up to 1/80th interlacing, can be exposed as NFTs and exchanged. Any Coretime Region can be contributed to the Instantaneous Coretime Pool. An unlimited number of individual Coretime contributors to the Instantaneous Coretime Pool. (Effectively limited only in number of cores and interlacing level; with current values this would allow 80,000 individual payees per timeslice). All keys are self-describing. Workload to communicate core (re-)assignments is well-bounded and low in weight. All mandatory bookkeeping workload is well-bounded in weight. type Timeslice = u32; // 80 block amounts.\ntype CoreIndex = u16;\ntype CoreMask = [u8; 10]; // 80-bit bitmap. // 128-bit (16 bytes)\nstruct RegionId { begin: Timeslice, core: CoreIndex, mask: CoreMask,\n}\n// 296-bit (37 bytes)\nstruct RegionRecord { end: Timeslice, owner: AccountId,\n} map Regions = Map