diff --git a/404.html b/404.html index b11c02b8f..4e17be0dc 100644 --- a/404.html +++ b/404.html @@ -91,7 +91,7 @@ diff --git a/approved/0001-agile-coretime.html b/approved/0001-agile-coretime.html index a066a834d..bf682103e 100644 --- a/approved/0001-agile-coretime.html +++ b/approved/0001-agile-coretime.html @@ -90,7 +90,7 @@ diff --git a/approved/0005-coretime-interface.html b/approved/0005-coretime-interface.html index 1c2cb298f..05879f861 100644 --- a/approved/0005-coretime-interface.html +++ b/approved/0005-coretime-interface.html @@ -90,7 +90,7 @@ diff --git a/approved/0007-system-collator-selection.html b/approved/0007-system-collator-selection.html index 4986be3ea..864079b28 100644 --- a/approved/0007-system-collator-selection.html +++ b/approved/0007-system-collator-selection.html @@ -90,7 +90,7 @@ diff --git a/approved/0008-parachain-bootnodes-dht.html b/approved/0008-parachain-bootnodes-dht.html index 357d1e57b..f139d9123 100644 --- a/approved/0008-parachain-bootnodes-dht.html +++ b/approved/0008-parachain-bootnodes-dht.html @@ -90,7 +90,7 @@ diff --git a/approved/0012-process-for-adding-new-collectives.html b/approved/0012-process-for-adding-new-collectives.html index a7a6940dd..e850e930f 100644 --- a/approved/0012-process-for-adding-new-collectives.html +++ b/approved/0012-process-for-adding-new-collectives.html @@ -90,7 +90,7 @@ diff --git a/approved/0014-improve-locking-mechanism-for-parachains.html b/approved/0014-improve-locking-mechanism-for-parachains.html index 027e999a3..39f90d782 100644 --- a/approved/0014-improve-locking-mechanism-for-parachains.html +++ b/approved/0014-improve-locking-mechanism-for-parachains.html @@ -90,7 +90,7 @@ diff --git a/approved/0022-adopt-encointer-runtime.html b/approved/0022-adopt-encointer-runtime.html index b3230ae55..61ade4191 100644 --- a/approved/0022-adopt-encointer-runtime.html +++ b/approved/0022-adopt-encointer-runtime.html @@ -90,7 +90,7 @@ diff --git a/approved/0032-minimal-relay.html b/approved/0032-minimal-relay.html index aa203152f..5d070f21b 100644 --- a/approved/0032-minimal-relay.html +++ b/approved/0032-minimal-relay.html @@ -90,7 +90,7 @@ diff --git a/approved/0050-fellowship-salaries.html b/approved/0050-fellowship-salaries.html index c9fb9791d..d6ea1f197 100644 --- a/approved/0050-fellowship-salaries.html +++ b/approved/0050-fellowship-salaries.html @@ -90,7 +90,7 @@ diff --git a/approved/0056-one-transaction-per-notification.html b/approved/0056-one-transaction-per-notification.html index 899280ed5..e8488578e 100644 --- a/approved/0056-one-transaction-per-notification.html +++ b/approved/0056-one-transaction-per-notification.html @@ -90,7 +90,7 @@ @@ -271,7 +271,7 @@

None.

None. This is a simple isolated change.

-

(source)

+

(source)

Table of Contents

-

RFC-0004: Remove the host-side runtime memory allocator

-
- - - -
Start Date2023-07-04
DescriptionUpdate the runtime-host interface to no longer make use of a host-side allocator
AuthorsPierre Krieger
-
-

Summary

-

Update the runtime-host interface to no longer make use of a host-side allocator.

-

Motivation

-

The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.

-

The API of many host functions consists of allocating a buffer. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer in order to free the buffer.

-

Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst case scenario simply consists of decreasing a number, and in the best case scenario is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.

-

Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.

-

Stakeholders

-

No attempt was made at convincing stakeholders.

-

Explanation

-

New host functions

-

This section contains a list of new host functions to introduce.

-
(func $ext_storage_read_version_2
-    (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
-(func $ext_default_child_storage_read_version_2
-    (param $child_storage_key i64) (param $key i64) (param $value_out i64)
-    (param $offset i32) (result i64))
-
-

The signature and behaviour of ext_storage_read_version_2 and ext_default_child_storage_read_version_2 are identical to their version 1 counterparts, but the return value has a different meaning. The new functions directly return the number of bytes that were written in the value_out buffer. If the entry doesn't exist, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in value_out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.

-

The runtime execution stops with an error if value_out is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.
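For illustration, here is a minimal sketch (hand-written imports, not the actual Substrate bindings) of how a runtime could wrap ext_storage_read_version_2, assuming the usual pointer-size encoding with the pointer in the lower 32 bits and the length in the upper 32 bits:

// Sketch only: hypothetical import of the proposed host function.
extern "C" {
    fn ext_storage_read_version_2(key: i64, value_out: i64, offset: i32) -> i64;
}

// Pack a buffer into a pointer-size value (assumption: pointer in the lower
// 32 bits, length in the upper 32 bits).
fn ptr_size(data: &[u8]) -> i64 {
    (data.as_ptr() as u32 as i64) | ((data.len() as u32 as i64) << 32)
}

// Returns None if the key is absent, otherwise the number of bytes written.
fn storage_read(key: &[u8], out: &mut [u8], offset: u32) -> Option<usize> {
    let written =
        unsafe { ext_storage_read_version_2(ptr_size(key), ptr_size(out), offset as i32) };
    if written == -1 { None } else { Some(written as usize) }
}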

-
(func $ext_storage_next_key_version_2
-    (param $key i64) (param $out i64) (return i32))
-(func $ext_default_child_storage_next_key_version_2
-    (param $child_storage_key i64) (param $key i64) (param $out i64) (return i32))
-
-

The behaviour of these functions is identical to their version 1 counterparts. Instead of allocating a buffer, writing the next key to it, and returning a pointer to it, the new version of these functions accepts an out parameter containing a pointer-size to the memory location where the host writes the output. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. These functions return the size, in bytes, of the next key, or 0 if there is no next key. If the size of the next key is larger than the buffer in out, the bytes of the key that fit the buffer are written to out and any extra bytes that don't fit are discarded.
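Since oversized keys are silently truncated, a caller that needs the whole key can retry with a larger buffer. A hedged sketch of that pattern (hypothetical wrapper, same pointer-size packing as in the earlier sketch):

extern "C" {
    fn ext_storage_next_key_version_2(key: i64, out: i64) -> i32;
}

fn ptr_size(data: &[u8]) -> i64 {
    (data.as_ptr() as u32 as i64) | ((data.len() as u32 as i64) << 32)
}

// Sketch: grow the output buffer until the whole next key fits.
fn next_key(key: &[u8]) -> Option<Vec<u8>> {
    let mut buf = vec![0u8; 32];
    loop {
        let size = unsafe { ext_storage_next_key_version_2(ptr_size(key), ptr_size(&buf)) };
        if size == 0 {
            return None; // no next key
        }
        let size = size as usize;
        if size <= buf.len() {
            buf.truncate(size);
            return Some(buf);
        }
        buf = vec![0u8; size]; // buffer was too small: retry with the reported size
    }
}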

-

Some notes:

- -
(func $ext_hashing_keccak_256_version_2
-    (param $data i64) (param $out i32))
-(func $ext_hashing_keccak_512_version_2
-    (param $data i64) (param $out i32))
-(func $ext_hashing_sha2_256_version_2
-    (param $data i64) (param $out i32))
-(func $ext_hashing_blake2_128_version_2
-    (param $data i64) (param $out i32))
-(func $ext_hashing_blake2_256_version_2
-    (param $data i64) (param $out i32))
-(func $ext_hashing_twox_64_version_2
-    (param $data i64) (param $out i32))
-(func $ext_hashing_twox_128_version_2
-    (param $data i64) (param $out i32))
-(func $ext_hashing_twox_256_version_2
-    (param $data i64) (param $out i32))
-(func $ext_trie_blake2_256_root_version_3
-    (param $data i64) (param $version i32) (param $out i32))
-(func $ext_trie_blake2_256_ordered_root_version_3
-    (param $data i64) (param $version i32) (param $out i32))
-(func $ext_trie_keccak_256_root_version_3
-    (param $data i64) (param $version i32) (param $out i32))
-(func $ext_trie_keccak_256_ordered_root_version_3
-    (param $data i64) (param $version i32) (param $out i32))
-(func $ext_default_child_storage_root_version_3
-    (param $child_storage_key i64) (param $out i32))
-(func $ext_crypto_ed25519_generate_version_2
-    (param $key_type_id i32) (param $seed i64) (param $out i32))
-(func $ext_crypto_sr25519_generate_version_2
-    (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
-(func $ext_crypto_ecdsa_generate_version_2
-    (param $key_type_id i32) (param $seed i64) (param $out i32) (return i32))
-
-

The behaviour of these functions is identical to their version 1 or version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of these functions accepts an out parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.
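As an illustration, a hedged sketch (hypothetical import, not the actual Substrate bindings) of calling one of these functions with a fixed-size output buffer that lives entirely in runtime memory:

extern "C" {
    fn ext_hashing_blake2_256_version_2(data: i64, out: i32);
}

fn ptr_size(data: &[u8]) -> i64 {
    (data.as_ptr() as u32 as i64) | ((data.len() as u32 as i64) << 32)
}

// Sketch: the 32-byte output is written directly into the runtime's buffer,
// so no host-side allocation and no ext_allocator_free call are involved.
fn blake2_256(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    unsafe { ext_hashing_blake2_256_version_2(ptr_size(data), out.as_mut_ptr() as i32) };
    out
}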

-
(func $ext_default_child_storage_root_version_3
-    (param $child_storage_key i64) (param $out i32))
-(func $ext_storage_root_version_3
-    (param $out i32))
-
-

The behaviour of these functions is identical to their version 1 and version 2 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new versions of these functions accept an out parameter containing the memory location where the host writes the output. The output is always of a size known at compilation time. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

-

I have taken the liberty of using the version 1 of these functions as a base rather than the version 2, as a PPP deprecating the version 2 of these functions has previously been accepted: https://github.com/w3f/PPPs/pull/6.

-
(func $ext_storage_clear_prefix_version_3
-    (param $prefix i64) (param $limit i64) (param $removed_count_out i32)
-    (return i32))
-(func $ext_default_child_storage_clear_prefix_version_3
-    (param $child_storage_key i64) (param $prefix i64)
-    (param $limit i64)  (param $removed_count_out i32) (return i32))
-(func $ext_default_child_storage_kill_version_4
-    (param $child_storage_key i64) (param $limit i64)
-    (param $removed_count_out i32) (return i32))
-
-

The behaviour of these functions is identical to their version 2 and 3 counterparts. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the versions 3 and 4 of these functions accept a removed_count_out parameter containing the memory location of an 8-byte buffer where the host writes the number of keys that were removed, in little endian. The runtime execution stops with an error if removed_count_out is outside of the range of the memory of the virtual machine. The functions return 1 to indicate that there are keys remaining, and 0 to indicate that all keys have been removed.
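For example, decoding the 8-byte little-endian counter that the host writes at removed_count_out could look like this (plain Rust, example values only):

fn main() {
    // Pretend the host wrote this into the 8-byte buffer at removed_count_out.
    let removed_count_out: [u8; 8] = [57, 5, 0, 0, 0, 0, 0, 0];
    let removed_keys = u64::from_le_bytes(removed_count_out);
    assert_eq!(removed_keys, 1337);
    // A return value of 1 would additionally indicate that keys remain under the prefix.
}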

-

Note that there is an alternative proposal to add new host functions with the same names: https://github.com/w3f/PPPs/pull/7. This alternative doesn't conflict with this one except for the version number. One proposal or the other will have to use versions 4 and 5 rather than 3 and 4.

-
(func $ext_crypto_ed25519_sign_version_2
-    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
-(func $ext_crypto_sr25519_sign_version_2
-    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
-(func $ext_crypto_ecdsa_sign_version_2
-    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i32))
-(func $ext_crypto_ecdsa_sign_prehashed_version_2
-    (param $key_type_id i32) (param $key i32) (param $msg i64) (param $out i32) (return i64))
-
-

The behaviour of these functions is identical to their version 1 counterparts. The new versions of these functions accept an out parameter containing the memory location where the host writes the signature. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. The signatures are always of a size known at compilation time. On success, these functions return 0. If the public key can't be found in the keystore, these functions return 1 and do not write anything to out.

-

Note that the return value is 0 on success and 1 on failure, while the previous version of these functions write 1 on success (as it represents a SCALE-encoded Some) and 0 on failure (as it represents a SCALE-encoded None). Returning 0 on success and non-zero on failure is consistent with common practices in the C programming language and is less surprising than the opposite.
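A hedged sketch of how a runtime could wrap the new signing convention (hypothetical import; a 64-byte sr25519 signature is assumed):

extern "C" {
    fn ext_crypto_sr25519_sign_version_2(key_type_id: i32, key: i32, msg: i64, out: i32) -> i32;
}

fn ptr_size(data: &[u8]) -> i64 {
    (data.as_ptr() as u32 as i64) | ((data.len() as u32 as i64) << 32)
}

// Sketch: 0 means success, any non-zero value means the key was not found.
fn sr25519_sign(key_type_id: i32, public_key: &[u8; 32], msg: &[u8]) -> Option<[u8; 64]> {
    let mut signature = [0u8; 64];
    let ret = unsafe {
        ext_crypto_sr25519_sign_version_2(
            key_type_id,
            public_key.as_ptr() as i32,
            ptr_size(msg),
            signature.as_mut_ptr() as i32,
        )
    };
    if ret == 0 { Some(signature) } else { None }
}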

-
(func $ext_crypto_secp256k1_ecdsa_recover_version_3
-    (param $sig i32) (param $msg i32) (param $out i32) (return i64))
-(func $ext_crypto_secp256k1_ecdsa_recover_compressed_version_3
-    (param $sig i32) (param $msg i32) (param $out i32) (return i64))
-
-

The behaviour of these functions is identical to their version 2 counterparts. The new versions of these functions accept an out parameter containing the memory location where the host writes the recovered public key. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out. The recovered public key is always of a size known at compilation time. On success, these functions return 0. On failure, these functions return a non-zero value and do not write anything to out.

-

The non-zero value returned on failure is:

- -

These values are equal to the values returned on error by the version 2 (see https://spec.polkadot.network/chap-host-api#defn-ecdsa-verify-error), but incremented by 1 in order to reserve 0 for success.

-
(func $ext_crypto_ed25519_num_public_keys_version_1
-    (param $key_type_id i32) (return i32))
-(func $ext_crypto_ed25519_public_key_version_2
-    (param $key_type_id i32) (param $key_index i32) (param $out i32))
-(func $ext_crypto_sr25519_num_public_keys_version_1
-    (param $key_type_id i32) (return i32))
-(func $ext_crypto_sr25519_public_key_version_2
-    (param $key_type_id i32) (param $key_index i32) (param $out i32))
-(func $ext_crypto_ecdsa_num_public_keys_version_1
-    (param $key_type_id i32) (return i32))
-(func $ext_crypto_ecdsa_public_key_version_2
-    (param $key_type_id i32) (param $key_index i32) (param $out i32))
-
-

These functions supersede the ext_crypto_ed25519_public_key_version_1, ext_crypto_sr25519_public_key_version_1, and ext_crypto_ecdsa_public_key_version_1 host functions.

-

Instead of calling ext_crypto_ed25519_public_key_version_1 in order to obtain the list of all keys at once, the runtime should instead call ext_crypto_ed25519_num_public_keys_version_1 in order to obtain the number of public keys available, then ext_crypto_ed25519_public_key_version_2 repeatedly. The ext_crypto_ed25519_public_key_version_2 function writes the public key of the given key_index to the memory location designated by out. The key_index must be between 0 (included) and n (excluded), where n is the value returned by ext_crypto_ed25519_num_public_keys_version_1. Execution must trap if key_index is out of range.

-

The same explanations apply for ext_crypto_sr25519_public_key_version_1 and ext_crypto_ecdsa_public_key_version_1.

-

Host implementers should be aware that the list of public keys (including their ordering) must not change while the runtime is running. This is most likely done by copying the list of all available keys either at the start of the execution or the first time the list is accessed.
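Putting the two functions together, a sketch (hypothetical imports, not the actual Substrate bindings) of how a runtime could enumerate all ed25519 public keys of a key type:

extern "C" {
    fn ext_crypto_ed25519_num_public_keys_version_1(key_type_id: i32) -> i32;
    fn ext_crypto_ed25519_public_key_version_2(key_type_id: i32, key_index: i32, out: i32);
}

// Sketch: keys are fetched one by one instead of as a single host-allocated list.
fn ed25519_public_keys(key_type_id: i32) -> Vec<[u8; 32]> {
    let n = unsafe { ext_crypto_ed25519_num_public_keys_version_1(key_type_id) };
    (0..n)
        .map(|key_index| {
            let mut key = [0u8; 32];
            unsafe {
                ext_crypto_ed25519_public_key_version_2(key_type_id, key_index, key.as_mut_ptr() as i32)
            };
            key
        })
        .collect()
}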

-
(func $ext_offchain_http_request_start_version_2
-  (param $method i64) (param $uri i64) (param $meta i64) (result i32))
-
-

The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the request identifier in it, and returning a pointer to it, the version 2 of this function simply returns the newly-assigned identifier of the HTTP request. On failure, this function returns -1. An identifier of -1 is invalid and is reserved to indicate failure.

-
(func $ext_offchain_http_request_write_body_version_2
-  (param $request_id i32) (param $chunk i64) (param $deadline i64) (result i32))
-(func $ext_offchain_http_response_read_body_version_2
-  (param $request_id i32) (param $buffer i64) (param $deadline i64) (result i64))
-
-

The behaviour of these functions is identical to their version 1 counterparts. Instead of allocating a buffer, writing two bytes in it, and returning a pointer to it, the new version of these functions simply indicates what happened:

- -

These values are equal to the values returned on error by the version 1 (see https://spec.polkadot.network/chap-host-api#defn-http-error), but tweaked in order to reserve positive numbers for success.

-

When it comes to ext_offchain_http_response_read_body_version_2, the host implementers must not read too much data at once in order to not create ambiguity in the returned value. Given that the size of the buffer is always less than or equal to 4 GiB, this is not a problem.

-
(func $ext_offchain_http_response_wait_version_2
-    (param $ids i64) (param $deadline i64) (param $out i32))
-
-

The behaviour of this function is identical to its version 1 counterpart. Instead of allocating a buffer, writing the output to it, and returning a pointer to it, the new version of this function accepts an out parameter containing the memory location where the host writes the output. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

-

The encoding of the response code is also modified compared to its version 1 counterpart and each response code now encodes to 4 little endian bytes as described below:

- -

The buffer passed to out must always have a size of 4 * n where n is the number of elements in the ids.
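For example, splitting the out buffer back into one 32-bit little-endian code per request id (runnable Rust; the values are arbitrary examples, not an interpretation of the encoding):

fn main() {
    // out must be 4 * n bytes for n request ids; example contents only.
    let out: [u8; 8] = [200, 0, 0, 0, 10, 0, 0, 0];
    let codes: Vec<u32> = out
        .chunks_exact(4)
        .map(|chunk| u32::from_le_bytes(chunk.try_into().unwrap()))
        .collect();
    assert_eq!(codes, vec![200, 10]);
}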

-
(func $ext_offchain_http_response_header_name_version_1
-    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
-(func $ext_offchain_http_response_header_value_version_1
-    (param $request_id i32) (param $header_index i32) (param $out i64) (result i64))
-
-

These functions supersede the ext_offchain_http_response_headers_version_1 host function.

-

Contrary to ext_offchain_http_response_headers_version_1, only one header indicated by header_index can be read at a time. Instead of calling ext_offchain_http_response_headers_version_1 once, the runtime should call ext_offchain_http_response_header_name_version_1 and ext_offchain_http_response_header_value_version_1 multiple times with an increasing header_index, until a value of -1 is returned.

-

These functions accept an out parameter containing a pointer-size to the memory location where the header name or value should be written. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine, even if the function wouldn't write anything to out.

-

These functions return the size, in bytes, of the header name or header value. If the request doesn't exist or is in an invalid state (as documented for ext_offchain_http_response_headers_version_1) or the header_index is out of range, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.

-

If the buffer in out is too small to fit the entire header name or value, only the bytes that fit are written and the rest are discarded.
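A hedged sketch of the resulting calling pattern for reading all headers of a response (hypothetical imports; the fixed 256-byte buffers are an arbitrary choice for the example):

extern "C" {
    fn ext_offchain_http_response_header_name_version_1(request_id: i32, header_index: i32, out: i64) -> i64;
    fn ext_offchain_http_response_header_value_version_1(request_id: i32, header_index: i32, out: i64) -> i64;
}

// Sketch: iterate with an increasing header_index until -1 is returned.
fn response_headers(request_id: i32) -> Vec<(Vec<u8>, Vec<u8>)> {
    let mut headers = Vec::new();
    let mut name_buf = [0u8; 256];
    let mut value_buf = [0u8; 256];
    // Pointer-size values for the two output buffers (pointer low, length high).
    let name_out = (name_buf.as_mut_ptr() as u32 as i64) | ((name_buf.len() as u32 as i64) << 32);
    let value_out = (value_buf.as_mut_ptr() as u32 as i64) | ((value_buf.len() as u32 as i64) << 32);
    for header_index in 0.. {
        let name_len = unsafe {
            ext_offchain_http_response_header_name_version_1(request_id, header_index, name_out)
        };
        if name_len < 0 {
            break; // no header at this index (or invalid request)
        }
        let value_len = unsafe {
            ext_offchain_http_response_header_value_version_1(request_id, header_index, value_out)
        };
        if value_len < 0 {
            break;
        }
        headers.push((
            name_buf[..(name_len as usize).min(name_buf.len())].to_vec(),
            value_buf[..(value_len as usize).min(value_buf.len())].to_vec(),
        ));
    }
    headers
}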

-
(func $ext_offchain_submit_transaction_version_2
-    (param $data i64) (return i32))
-(func $ext_offchain_http_request_add_header_version_2
-    (param $request_id i32) (param $name i64) (param $value i64) (result i32))
-
-

Instead of allocating a buffer, writing 1 or 0 in it, and returning a pointer to it, the version 2 of these functions return 0 or 1, where 0 indicates success and 1 indicates failure. The runtime must interpret any non-0 value as failure, but the client must always return 1 in case of failure.

-
(func $ext_offchain_local_storage_read_version_1
-    (param $kind i32) (param $key i64) (param $value_out i64) (param $offset i32) (result i64))
-
-

This function supersedes the ext_offchain_local_storage_get_version_1 host function, and uses an API and logic similar to ext_storage_read_version_2.

-

It reads the offchain local storage key indicated by kind and key starting at the byte indicated by offset, and writes the value to the pointer-size indicated by value_out.

-

The function returns the number of bytes that were written in the value_out buffer. If the entry doesn't exist, a value of -1 is returned. Given that the host must never write more bytes than the size of the buffer in value_out, and that the size of this buffer is expressed as a 32-bit number, a 64-bit value of -1 is not ambiguous.

-

The runtime execution stops with an error if value_out is outside of the range of the memory of the virtual machine, even if the size of the buffer is 0 or if the amount of data to write would be 0 bytes.

-
(func $ext_offchain_network_peer_id_version_1
-    (param $out i64))
-
-

This function writes the PeerId of the local node to the memory location indicated by out. A PeerId is always 38 bytes long. The runtime execution stops with an error if out is outside of the range of the memory of the virtual machine.

-
(func $ext_input_size_version_1
-    (return i64))
-(func $ext_input_read_version_1
-    (param $offset i64) (param $out i64))
-
-

When a runtime function is called, the host uses the allocator to allocate memory within the runtime into which it writes some input data. These two new host functions provide an alternative way to access the input that doesn't make use of the allocator.

-

The ext_input_size_version_1 host function returns the size in bytes of the input data.

-

The ext_input_read_version_1 host function copies some data from the input data to the memory of the runtime. The offset parameter indicates the offset within the input data where to start copying, and must be less than or equal to the value returned by ext_input_size_version_1. The out parameter is a pointer-size containing the buffer where to write to. The runtime execution stops with an error if offset is strictly greater than the size of the input data, or if out is outside of the range of the memory of the virtual machine, even if the amount of data to copy would be 0 bytes.
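A hedged sketch of reading the full entry-point input with these two functions (hypothetical wrapper; same pointer-size packing as in the earlier sketches):

extern "C" {
    fn ext_input_size_version_1() -> i64;
    fn ext_input_read_version_1(offset: i64, out: i64);
}

// Sketch: copy the whole input into a runtime-allocated buffer in one call.
fn read_input() -> Vec<u8> {
    let size = unsafe { ext_input_size_version_1() } as usize;
    let mut input = vec![0u8; size];
    let out = (input.as_mut_ptr() as u32 as i64) | ((input.len() as u32 as i64) << 32);
    unsafe { ext_input_read_version_1(0, out) };
    input
}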

-

Other changes

-

In addition to the new host functions, this RFC proposes two changes to the runtime-host interface:

+
  • Stakeholders
  • +
  • Explanation -

    All the host functions that are being superseded by new host functions are now considered deprecated and should no longer be used. The following other host functions are similarly also considered deprecated:

    +
  • +
  • Discussion of Other Proposals -

    Drawbacks

    -

    This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.

    -

    Prior Art

    -

    The API of these new functions was heavily inspired by the APIs used in the C programming language.

    -

    Unresolved Questions

    -

    The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.

    -

    It is expected that most host functions are faster than, or equal in speed to, their deprecated counterparts, with the following exceptions:

    - -

    Future Possibilities

    -

    After this RFC, we could in a future version remove the allocator from the source code of the host altogether, by removing support for all the deprecated host functions. This would remove the possibility to synchronize older blocks, which is probably controversial and requires some preparation that is out of scope of this RFC.

    -

    (source)

    -

    Table of Contents

    - @@ -2200,16 +1953,16 @@

    AuthorsAurora Poppyseed, Just_Luuuu, Viki Val -

    Summary

    -

    This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hub for creating NFT collections. The objective is to lower the barrier to entry for artists, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.

    -

    Motivation

    -

    The current deposit of 10 DOT for collection creation on the Polkadot Asset Hub presents a significant financial barrier for many artists. By lowering the deposit requirements, we aim to encourage more artists to participate in the Polkadot NFT ecosystem, thereby enriching the diversity and vibrancy of the community and its offerings.

    +

    Summary

    +

    This RFC proposes changing the current deposit requirements on the Polkadot and Kusama Asset Hubs for creating an NFT collection and minting an individual NFT, and lowering the corresponding metadata and attribute deposits. The objective is to lower the barrier to entry for NFT creators, fostering a more inclusive and vibrant ecosystem while maintaining network integrity and preventing spam.

    +

    Motivation

    +

    The current deposit of 10 DOT for collection creation (along with 0.01 DOT for item deposit and 0.2 DOT for metadata and attribute deposit) on the Polkadot Asset Hub and 0.1 KSM on Kusama Asset Hub presents a significant financial barrier for many NFT creators. By lowering the deposit requirements, we aim to encourage more NFT creators to participate in the Polkadot NFT ecosystem, thereby enriching the diversity and vibrancy of the community and its offerings.

    The actual implementation of the deposit is an arbitrary number coming from the Uniques pallet. It is not a result of any economic analysis. This proposal aims to adjust the deposit from a constant to dynamic pricing based on the deposit function, with respect to stakeholders.

    Requirements

    -

    Stakeholders

    +

    Stakeholders

    Previous discussions have been held within the Polkadot Forum community and with artists expressing their concerns about the deposit amounts. Link.

    -

    Explanation

    +

    Explanation

    +

    This RFC proposes a revision of the deposit constants in the nfts pallet on the Polkadot Asset Hub. The new deposit amounts would be determined by a standard deposit formula.

    This RFC suggests modifying deposit constants defined in the nfts pallet on the Polkadot Asset Hub to require a lower deposit. The reduced deposit amount should be determined by the deposit adjusted by the pricing mechanism (arbitrary number/another pricing function).

    -

    Current deposit requirements are as follows:

    +

    Current code structure

    +

    Current deposit requirements are as follows

    +

    Looking at the currently implemented code structure, we can see that the pricing re-uses the logic of how the Uniques deposits are defined:

    #![allow(unused)]
     fn main() {
     parameter_types! {
    @@ -2230,9 +1986,11 @@ 

    Explanation pub const NftsAttributeDepositBase: Balance = UniquesAttributeDepositBase::get(); pub const NftsDepositPerByte: Balance = UniquesDepositPerByte::get(); } - -// -parameter_types! { +}

    +

    In the existing setup, the Uniques are defined with specific deposit values for different elements:

    +
    #![allow(unused)]
    +fn main() {
    +parameter_types! {
     	pub const UniquesCollectionDeposit: Balance = UNITS / 10; // 1 / 10 UNIT deposit to create a collection
     	pub const UniquesItemDeposit: Balance = UNITS / 1_000; // 1 / 1000 UNIT deposit to mint an item
     	pub const UniquesMetadataDepositBase: Balance = deposit(1, 129);
    @@ -2240,7 +1998,38 @@ 

    Explanation pub const UniquesDepositPerByte: Balance = deposit(0, 1); } }

    -

    The proposed change would modify the deposit constants to require a lower deposit. The reduced deposit amount should be determined by deposit adjusted by an arbitrary number.

    +

    As we can see in the code definition above, the current code does not use the deposit function in the following instances: UniquesCollectionDeposit and UniquesItemDeposit.

    +

    Proposed Modification Using the Deposit Function

    +

    This proposed modification adjusts the deposits to use the deposit function instead of using an arbitrary number.

    +
    #![allow(unused)]
    +fn main() {
    +parameter_types! {
    +	pub const NftsCollectionDeposit: Balance = deposit(1, 130);
    +	pub const NftsItemDeposit: Balance = deposit(1, 164);
    +	pub const NftsMetadataDepositBase: Balance = deposit(1, 129);
    +	pub const NftsAttributeDepositBase: Balance = deposit(1, 0);
    +	pub const NftsDepositPerByte: Balance = deposit(0, 1);
    +}
    +}
    +
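    For context, a rough, self-contained sketch of the general shape of the deposit helper used above. The fee constants below are hypothetical placeholders for illustration, not the values configured on Asset Hub:

type Balance = u128;

// Hypothetical per-item and per-byte fees, in plancks (placeholders only).
const ITEM_FEE: Balance = 2_000_000_000;
const BYTE_FEE: Balance = 100_000;

// Charges for the number of storage items created plus the bytes they occupy.
const fn deposit(items: u32, bytes: u32) -> Balance {
    items as Balance * ITEM_FEE + bytes as Balance * BYTE_FEE
}

fn main() {
    // e.g. the proposed collection deposit charges for 1 item and 130 bytes of state.
    println!("collection deposit = {} plancks", deposit(1, 130));
}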

The calculations shown below were produced using the following repository: rfc-pricing.
+Polkadot
+| Name                  | Current price implementation | Proposed price using the new deposit function |
+|-----------------------|------------------------------|-----------------------------------------------|
+| collectionDeposit     | 10 DOT                       | 0.20064 DOT                                   |
+| itemDeposit           | 0.01 DOT                     | 0.20081 DOT                                   |
+| metadataDepositBase   | 0.20129 DOT                  | 0.20076 DOT                                   |
+| attributeDepositBase  | 0.2 DOT                      | 0.2 DOT                                       |

    +

Similarly, the prices for the Kusama ecosystem were calculated as:
+Kusama:
+| Name                  | Current price implementation | Proposed Price in KSM |
+|-----------------------|------------------------------|-----------------------|
+| collectionDeposit     | 0.1 KSM                      | 0.006688 KSM          |
+| itemDeposit           | 0.001 KSM                    | 0.000167 KSM          |
+| metadataDepositBase   | 0.006709666617 KSM           | 0.0006709666617 KSM   |
+| attributeDepositBase  | 0.00666666666 KSM            | 0.000666666666 KSM    |

    +

    Enhanced Approach to Further Lower Barriers for Entry

    +

    In an effort to further lower barriers to entry and foster greater inclusivity, we propose additional modifications to the pricing structure. These proposed reductions are based on a collaborative and calculated approach, involving the consensus of leading NFT creators within the Polkadot and Kusama Asset Hub communities. The adjustments to deposit amounts are not made arbitrarily. Instead, they are the result of detailed discussions and analyses conducted with prominent NFT creators.

    +

    Proposed Code Adjustments

    #![allow(unused)]
     fn main() {
     parameter_types! {
    @@ -2251,68 +2040,76 @@ 

    Explanation pub const NftsDepositPerByte: Balance = deposit(0, 1); } }

    -

    Prices and Proposed Prices on Polkadot Asset Hub: -Scroll right

    -
    | **Name**                  | **Current price implementation** | **Price if DOT = 5$**  | **Price if DOT goes to 50$**  | **Proposed Price in DOT** | **Proposed Price if DOT = 5$**   | **Proposed Price if DOT goes to 50$**|
    -|---------------------------|----------------------------------|------------------------|-------------------------------|---------------------------|----------------------------------|--------------------------------------|
    -| collectionDeposit         | 10 DOT                           | 50 $                   | 500 $                         | 0.20064 DOT                   | ~1 $                            | 10.32$                                   |
    -| itemDeposit               | 0.01 DOT                         | 0.05 $                 | 0.5 $                         | 0.005 DOT                 | 0.025 $                          | 0.251$                                |
    -| metadataDepositBase       | 0.20129 DOT                      | 1.00645 $              | 10.0645 $                     | 0.0020129 DOT             | 0.0100645 $                      | 0.100645$                            |
    -| attributeDepositBase      | 0.2 DOT                          | 1 $                    | 10 $                          | 0.002 DOT                 | 0.01 $                           | 0.1$                                 |
    -
    -

    Prices and Proposed Prices on Kusama Asset Hub: -Scroll right

    -
    | **Name**                  | **Current price implementation** | **Price if KSM = 23$** | **Price if KSM goes to 500$** | **Proposed Price in KSM** | **Proposed Price if KSM = 23$**  | **Proposed Price if KSM goes to 500$** |
    -|---------------------------|----------------------------------|------------------------|-------------------------------|---------------------------|----------------------------------|----------------------------------------|
    -| collectionDeposit         | 0.1 KSM                          | 2.3 $                  | 50 $                          | 0.006688 KSM                  | 0.154 $                           | 3.34 $                                    |
    -| itemDeposit               | 0.001 KSM                        | 0.023 $                | 0.5 $                         | 0.000167 KSM                | 0.00385 $                         | 0.0835 $                                 |
    -| metadataDepositBase       | 0.006709666617 KSM               | 0.15432183319 $        | 3.3548333085 $                | 0.0006709666617 KSM       | 0.015432183319 $                 | 0.33548333085 $                        |
    -| attributeDepositBase      | 0.00666666666 KSM                | 0.15333333318 $        | 3.333333333 $                 | 0.000666666666 KSM        | 0.015333333318 $                 | 0.3333333333 $                         |
    -
    -
    -
    -

    Note: This is only a proposal for change and can be modified upon additional conversation.

    -
    -

    Drawbacks

    +

    Prices and Proposed Prices on Polkadot Asset Hub:

    +

Polkadot
+| Name                  | Current price implementation | Proposed Prices |
+|-----------------------|------------------------------|-----------------|
+| collectionDeposit     | 10 DOT                       | 0.20064 DOT     |
+| itemDeposit           | 0.01 DOT                     | 0.005 DOT       |
+| metadataDepositBase   | 0.20129 DOT                  | 0.002 DOT       |
+| attributeDepositBase  | 0.2 DOT                      | 0.002 DOT       |

    +

Kusama
+| Name                  | Current price implementation | Proposed Price in KSM |
+|-----------------------|------------------------------|-----------------------|
+| collectionDeposit     | 0.1 KSM                      | 0.006688 KSM          |
+| itemDeposit           | 0.001 KSM                    | 0.000167 KSM          |
+| metadataDepositBase   | 0.006709666617 KSM           | 0.0006709666617 KSM   |
+| attributeDepositBase  | 0.00666666666 KSM            | 0.000666666666 KSM    |

    +

    Discussion of Other Proposals

    +

    Several innovative proposals have been considered to enhance the network's adaptability and manage deposit requirements more effectively:

    +

    Enhanced Weak Governance Origin Model

    +

    The concept of a weak governance origin, controlled by a consortium like the System Collective, has been proposed. This model would allow for dynamic adjustments of NFT deposit requirements in response to market conditions, adhering to storage deposit norms.

    +

    Enhancements and Concerns:

    + +

    Function-Based Pricing Model

    +

    Another proposal is to use a mathematical function to regulate deposit prices, initially allowing low prices to encourage participation, followed by a gradual increase to prevent network bloat.

    +

    Refinements:

    + +

    Linking Deposit to USD(x) Value

    +

    This approach suggests pegging the deposit value to a stable currency like the USD, introducing predictability and stability for network users.

    +

    Considerations and Challenges:

    + +

    Each of these proposals offers unique advantages and challenges. The optimal approach may involve a combination of these ideas, carefully adjusted to address the specific needs and dynamics of the Polkadot and Kusama networks.

    +

    Drawbacks

    Modifying deposit requirements necessitates a balanced assessment of the potential drawbacks. Highlighted below are cogent points extracted from the discourse on the Polkadot Forum conversation, which provide critical perspectives on the implications of such changes:

    -
    -

    But NFT deposits were chosen somewhat arbitrarily at genesis and it’s a good exercise to re-evaluate them and adapt if they are causing pain and if lowering them has little or no negative side effect (or if the trade-off is worth it). --> joepetrowski

    -
    -
    -

    Underestimates mean that state grows faster, although not unbounded - effectively an economic subsidy on activity. Overestimates mean that the state grows slower - effectively an economic depressant on activity. --> rphmeier

    -
    -
    -

    Technical: We want to prevent state bloat, therefore using state should have a cost associated with it. --> joepetrowski

    -
    +

    The discourse around modifying deposit requirements includes various perspectives:

    +

    Adjusting NFT deposit requirements on Polkadot and Kusama Asset Hubs involves key challenges:

    +
    1. State Growth and Technical Concerns: Lowering deposit requirements can lead to increased blockchain state size, potentially causing state bloat. This growth needs to be managed to prevent strain on the network's resources and maintain operational efficiency.

    2. Network Security and Market Response: Reduced deposits might increase transaction volume, potentially bloating the state, thereby impacting network security. Additionally, adapting to the cryptocurrency market's volatility is crucial. The mechanism for setting deposit amounts must be responsive yet stable, avoiding undue complexity for users.

    3. Economic Impact on Previous Stakeholders: The change could have varied economic effects on previous (before the change) creators, platform operators, and investors. Balancing these interests is essential to ensure the adjustment benefits the ecosystem without negatively impacting its value dynamics. However, in the particular case of the Polkadot and Kusama Asset Hubs this does not pose a concern, since there are very few collections currently and thus previous stakeholders wouldn't be much affected. As of 9 January 2024 there are 42 collections on Polkadot Asset Hub and 191 on Kusama Asset Hub, with relatively low volume.

    Testing, Security, and Privacy

    -

    The change is backwards compatible. The prevention of "spam" could be prevented by OpenGov proposal to forceDestoy list of collections that are not suitable.

    +

    Security concerns

    +

    Spam could be prevented by an OpenGov proposal to forceDestroy a list of collections that are not suitable.

    Performance, Ergonomics, and Compatibility

    Performance

    -

    This change is not expected to have a significant impact on the overall performance of the Polkadot Asset Hub. However, monitoring the network closely, especially in the initial stages after implementation, is crucial to identify and mitigate any potential issues.

    -

    Additionally, a supplementary proposal aims to augment the network's adaptability:

    -
    -

    Just from a technical perspective; I think the best we can do is to use a weak governance origin that is controlled by some consortium (ie. System Collective). -This origin could then update the NFT deposits any time the market conditions warrant it - obviously while honoring the storage deposit requirements. -To implement this, we need RFC#12 and the Parameters pallet from @xlc. --> OliverTY

    -
    -

    This dynamic governance approach would facilitate a responsive and agile economic model for deposit management, ensuring that the network remains accessible and robust in the face of market volatility.

    +

    The primary performance consideration stems from the potential for state bloat due to increased activity from lower deposit requirements. It's vital to monitor and manage this to avoid any negative impact on the chain's performance. Strategies for mitigating state bloat, including efficient data management and periodic reviews of storage requirements, will be essential.

    Ergonomics

    -

    The proposed change aims to enhance the user experience for artists, making Polkadot more accessible and user-friendly.

    +

    The proposed change aims to enhance the user experience for artists, traders, and users of the Kusama and Polkadot Asset Hubs, making Polkadot and Kusama more accessible and user-friendly.

    Compatibility

    The change does not impact compatibility, as the redeposit function is already implemented.

    -

    Unresolved Questions

    - +

    Unresolved Questions

    +

    There remain unresolved questions regarding the implementation of a function-based pricing model for deposits and the feasibility of linking deposits to a USD(x) value. These aspects require further exploration and discussion to ascertain their viability and potential impact on the ecosystem.

    - -

    If accepted, this RFC could pave the way for further discussions and proposals aimed at enhancing the inclusivity and accessibility of the Polkadot ecosystem. Future work could also explore having a weak governance origin for deposits as proposed by Oliver.

    +

    We recommend initially lowering the deposit to the suggested levels. Subsequently, based on the outcomes and feedback, we can continue discussions on more complex models such as function-based pricing or currency-linked deposits.

    +

    If accepted, this RFC could pave the way for further discussions and proposals aimed at enhancing the inclusivity and accessibility of the Polkadot ecosystem.

    (source)

    Table of Contents

    -

    Stakeholders

    +

    Stakeholders

    All chain teams are stakeholders, as implementing this feature would require timely effort on their side and would impact compatibility with older tools.

    This feature is essential for all offline signer tools; many regular signing tools might make use of it. In general, this RFC greatly improves security of any network implementing it, as many governing keys are used with offline signers.

    Implementing this RFC would remove the requirement to maintain metadata portals manually, as the task of metadata verification would effectively be moved to the consensus mechanism of the chain.

    -

    Explanation

    +

    Explanation

    A detailed description of the metadata shortening and digest process is provided in the metadata-shortener crate (see cargo doc --open and examples). The algorithms of the process are presented below.

    Definitions

    Metadata structure

    @@ -3920,7 +3717,7 @@

    Chain v 0x02 - 0xFFreservedreserved for future use -

    Drawbacks

    +

    Drawbacks

    Increased transaction size

    A 1-byte increase in transaction size due to the signed extension value. The digest is not included in the transferred transaction, only in the signing process.

    Transition overhead

    @@ -3937,7 +3734,7 @@

    Compatibili

    Proposal in this form is not compatible with older tools that do not implement proper MetadataV14 self-descriptive features; those would have to be upgraded to include a new signed extensions field.

    Prior Art and References

    This project was developed upon a Polkadot Treasury grant; relevant development links are located in metadata-offline-project repository.

    -

    Unresolved Questions

    +

    Unresolved Questions

    1. How would polkadot-js handle the transition?
    2. Where would non-rust tools like Ledger apps get shortened metadata content?
    3. @@ -3985,11 +3782,11 @@

      Summary

      +

      Summary

      Propose a way of permuting the availability chunk indices assigned to validators, in the context of recovering available data from systematic chunks, with the purpose of fairly distributing network bandwidth usage.

      -

      Motivation

      +

      Motivation

    Currently, the ValidatorIndex is always identical to the ChunkIndex. Since the validator array is only shuffled once per session, naively using the ValidatorIndex as the ChunkIndex would place unreasonable stress on the first N/3 validators during an entire session, when favouring availability recovery from systematic chunks.

      @@ -3997,9 +3794,9 @@

      Motivation -

      Stakeholders

      +

      Stakeholders

      Relay chain node core developers.

      -

      Explanation

      +

      Explanation

      Systematic erasure codes

      An erasure coding algorithm is considered systematic if it preserves the original unencoded data as part of the resulting code. @@ -4062,7 +3859,6 @@

      Proposed pub fn get_chunk_index( n_validators: u32, validator_index: ValidatorIndex, - block_number: BlockNumber, core_index: CoreIndex ) -> ChunkIndex { let threshold = systematic_threshold(n_validators); // Roughly n_validators/3 @@ -4154,7 +3950,7 @@
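    The body of the mapping is elided by this diff; purely as an illustration of the idea described here (rotating chunk indices by a per-core offset so systematic chunks land on different validator subsets per core), a self-contained sketch with a placeholder threshold follows. This is not the RFC's exact code.

// Illustrative sketch only; the placeholder threshold is an assumption.
fn systematic_threshold(n_validators: u32) -> u32 {
    // Roughly n_validators / 3 (placeholder; the real value comes from the
    // erasure-coding recovery threshold).
    n_validators.saturating_sub(1) / 3 + 1
}

fn get_chunk_index(n_validators: u32, validator_index: u32, core_index: u32) -> u32 {
    let threshold = systematic_threshold(n_validators);
    let core_start_pos = core_index * threshold;
    (core_start_pos + validator_index) % n_validators
}

fn main() {
    // With 10 validators, core 1 starts its systematic chunks at a different
    // offset than core 0, spreading the load across the validator set.
    assert_eq!(get_chunk_index(10, 0, 0), 0);
    assert_eq!(get_chunk_index(10, 0, 1), 4);
}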

      Configuration::set_node_feature extrinsic. Once the feature is enabled and new configuration is live, the validator->chunk mapping ceases to be a 1:1 mapping and systematic recovery may begin.

      -

      Drawbacks

      +

      Drawbacks

      • Getting access to the core_index that used to be occupied by a candidate in some parts of the dispute protocol is very complicated (See appendix A). This RFC assumes that availability-recovery processes initiated during @@ -4183,7 +3979,7 @@

        Compatibili

        Prior Art and References

        See comments on the tracking issue and the in-progress PR

        -

        Unresolved Questions

        +

        Unresolved Questions

        Not applicable.

        This enables future optimisations for the performance of availability recovery, such as retrieving batched systematic @@ -4267,20 +4063,20 @@

        Summary

        +

        Summary

        This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".

        Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.

        The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.

        -

        Motivation

        +

        Motivation

        The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.

        It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easily possible to build a list of nodes that have a specific piece of data available.

        If you want to download for example the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding through trial and error a node that has the data can take a long time.

        This RFC attempts to solve this problem by giving the possibility to build a list of nodes that are capable of serving specific data.

        -

        Stakeholders

        +

        Stakeholders

        Low-level client developers. People interested in accessing the archive of the chain.

        -

        Explanation

        +

        Explanation

        Reading RFC #8 first might help with comprehension, as this RFC is very similar.

        Please keep in mind while reading that everything below applies for both relay chains and parachains, except mentioned otherwise.

        Capabilities

        @@ -4315,7 +4111,7 @@

        Drawbacks

        +

        Drawbacks

        None that I can see.

        Testing, Security, and Privacy

        The content of this section is basically the same as the one in RFC 8.

        @@ -4336,7 +4132,7 @@

        Compatibili

        Irrelevant.

        Prior Art and References

        Unknown.

        -

        Unresolved Questions

        +

        Unresolved Questions

        While it fundamentally doesn't change much to this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?

        This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related requests to using the native peer-to-peer protocol rather than JSON-RPC.

        @@ -4378,19 +4174,19 @@

        Summary

        +

        Summary

        Currently, the Substrate runtime uses a simple allocator defined by the host side. Every runtime MUST import these allocator functions for normal execution. This situation makes runtime code not versatile enough.

        So this RFC proposes to define a new spec for the allocator part to make the Substrate runtime more generic.

        -

        Motivation

        +

        Motivation

        Since this RFC defines a new way of handling the allocator, we now regard the old one as the legacy allocator. Because the allocator implementation details are defined by the Substrate client, a parachain/parathread cannot customize the memory allocation algorithm; the new specification allows the runtime to customize memory allocation and then export the allocator functions according to the specification for the client side to use. Another benefit is that some new host functions can be designed without allocating memory on the client, which may bring potential performance improvements. It will also help provide a unified and clean specification if the Substrate runtime supports multiple targets (e.g. RISC-V). There is a further potential benefit: many programming languages that support compilation to Wasm may not be friendly to an external allocator, so this change makes it easier for other programming languages to enter the Substrate runtime ecosystem. The last and most important benefit is that, for offchain context execution, the runtime can fully support pure Wasm. What this means is that all imported host functions could be left as stub functions that are never actually called, so the various verification logic of the runtime can be converted into pure Wasm, which makes it possible for the Substrate runtime to run block verification in other environments (such as browsers and other non-Substrate environments).

        -

        Stakeholders

        +

        Stakeholders

        No attempt was made at convincing stakeholders.

        -

        Explanation

        +

        Explanation

        Runtime side spec

        This section contains a list of functions should be exported by substrate runtime.

        We define the spec as version 1, so the following dummy function v1 MUST be exported to hint @@ -4429,7 +4225,7 @@

        Client side allocator.

      Detail-heavy explanation of the RFC, suitable for explanation to an implementer of the changeset. This should address corner cases in detail and provide justification behind decisions, and provide rationale for how the design meets the solution requirements.

      -

      Drawbacks

      +

      Drawbacks

      The allocator inside of the runtime will make the code size bigger, but the increase is not significant. The allocator inside of the runtime may slow down (or speed up) the runtime, but the effect is likewise not obvious.

      We could ignore these drawbacks since they are not prominent. Moreover, execution efficiency is largely decided by the runtime developer; we cannot prevent poor efficiency if a developer wants to write inefficient code.

      @@ -4452,7 +4248,7 @@

      Move the allocator inside of the runtime
    4. Add new allocator design
    5. -

      Unresolved Questions

      +

      Unresolved Questions

      None at this time.

      The content discussed in RFC-0004 is basically orthogonal, but it could still be considered together with this RFC, and it is preferred that this RFC be implemented first.

      @@ -4487,17 +4283,17 @@

      AuthorsSourabh Niyogi -

      Summary

      +

      Summary

      This RFC proposes lowering the existential deposit requirements on Asset Hub for Polkadot by a factor of 25, from 0.1 DOT to 0.004 DOT. The objective is to lower the barrier to entry for asset minters to mint a new asset to the entire DOT token holder base, and to make Asset Hub on Polkadot a place where everyone can do small asset conversions.

      -

      Motivation

      +

      Motivation

      The current existential deposit is 0.1 DOT on Asset Hub for Polkadot. While this does not appear to be a significant financial barrier for most people (only $0.80), this value makes Asset Hub impractical for Asset Hub Minters, specifically for the case where an Asset Hub Minter wishes to mint a new asset for the entire community of DOT holders (e.g. 1.25MM DOT holders would cost 125K DOT @ $8 = $1MM).

      By lowering the existential deposit requirement from 0.1 DOT to 0.004 DOT, the cost of minting to the entire community of DOT holders goes from an unmanageable number [125K DOT, the value of several houses circa December 2023] down to a manageable number [5K DOT, the value of a car circa December 2023].

      -

      Stakeholders

      +

      Stakeholders

      -

      Explanation

      +

      Explanation

      The exact amount of the existential deposit (ED) is proposed to be 0.004 DOT based on