The experience
Having spoken to several users, one recurring theme seems to be not knowing when to reverse data when building scripts or supplying contract parameters. I've personally experienced this as well while contributing to the project, and it is really time consuming. Based on my experience creating a full node in Python, I know there potentially only need to be two places where data is reversed:
1. parsing a string to a UInt160/256 type
2. converting a UInt160/256 type to a string
My observation
I have no complete view of the code base, nor am I a JavaScript/TypeScript expert, but it seems that the HexString class is the (legacy) solution for dealing with byte arrays and the lack of UInt160/UInt256 classes. I see an opportunity to get rid of the confusion by removing anything that says "reverse", "toBigEndian", "toLittleEndian" or "isLittleEndian", anything named <left_type>2<right_type>() (e.g. str2hexstring), and HexString itself.
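To illustrate what that looks like from the outside, a small example using only helpers already mentioned in this issue (signatures quoted from memory, so they may be slightly off):

```ts
import { u } from "@cityofzion/neon-js";

// All of these produce "some hex string", but none of the names tell the
// caller which byte order a script or contract parameter actually expects.
const ascii = u.str2hexstring("hello");        // ascii -> hex, order untouched
const num = u.num2hexstring(128, 4, true);     // number -> hex, endianness hidden in a positional flag
const reversed = u.reverseHex(num);            // reversed... but was it big- or little-endian before?
const wrapped = u.HexString.fromHex(reversed); // wrapped, but the order question remains
```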
My idea
Introduce UInt160 and UInt256 classes. In the old Python code we had the same order-guessing problem, and introducing these classes became a life saver for maintaining as well as using the library. Ontology seems to do the same.
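A minimal sketch of what a UInt160 could look like, assuming a Uint8Array as internal storage; none of this is existing neon-js code, and UInt256 would be the same with 32 bytes:

```ts
// Sketch only: the value is kept internally in one fixed order; reversal
// happens exclusively at the string boundaries, nowhere else.
class UInt160 {
  private constructor(private readonly data: Uint8Array) {
    if (data.length !== 20) throw new Error("UInt160 requires exactly 20 bytes");
  }

  // Place 1: parsing a string to a UInt160 type. The big-endian hex string
  // users see (e.g. a script hash from an explorer) is reversed once, here.
  static fromString(hex: string): UInt160 {
    const bytes = Uint8Array.from(Buffer.from(hex, "hex"));
    return new UInt160(bytes.reverse());
  }

  // Place 2: converting a UInt160 type to a string. Reverse once, back to
  // the big-endian form users expect to read.
  toString(): string {
    return Buffer.from(this.data).reverse().toString("hex");
  }

  // Everything else (serialization into scripts, equality, etc.) works on
  // the raw bytes and never has to think about endianness again.
  toArray(): Uint8Array {
    return this.data.slice();
  }
}
```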
Delete HexString and replace it with ArrayBuffer, or some higher-level view like DataView or Uint8Array. DataView has convenience functions like getUint32() and setUint32() which take away the need for constructions like HexString.fromHex(num2hexstring(value)).
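For comparison, a small sketch of the same write/read done through a plain DataView (standard ECMAScript API; the HexString line mirrors the construction quoted above and its exact signature may differ):

```ts
import { u } from "@cityofzion/neon-js";

const value = 0xdeadbeef;

// Today: produce a hex string and wrap it, with the byte order only implied.
const viaHexString = u.HexString.fromHex(u.num2hexstring(value, 4, true));

// With a plain byte buffer the endianness is stated exactly once, at the
// call that needs it, and no intermediate string is involved.
const view = new DataView(new ArrayBuffer(4));
view.setUint32(0, value, true);                 // true = little-endian
console.log(view.getUint32(0, true) === value); // true
```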
My personal choice would actually be Node's Buffer instead of ArrayBuffer, because it has a more convenient interface for creating byte arrays -edit- and because it automatically advances the pointer in the stream instead of having to track it yourself -end edit-. It also already has convenience constructors to read data from a hex string (i.e. Buffer.from("aabb", "hex")) and other types. The downside is that it seems to be a separate dependency.
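A few of those conveniences, using only standard Node.js Buffer API:

```ts
// Creating byte arrays from other representations is a one-liner:
const fromHex = Buffer.from("aabbccdd", "hex");
const fromAscii = Buffer.from("hello", "ascii");

// Reading typed values states the endianness in the method name:
console.log(fromHex.readUInt32LE(0).toString(16)); // "ddccbbaa"
console.log(fromHex.readUInt32BE(0).toString(16)); // "aabbccdd"

// And going back to hex for display is equally direct:
console.log(fromHex.toString("hex")); // "aabbccdd"
```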
Another advantage of using a proper byte array type is that in function implementations you don't have to think about whether size/length represents 1 byte or 1 ASCII character. If you port new code from C#, you know that 32 bytes means size 32 on the neon-js side, not sometimes 32 and other times 64 because it's actually ASCII.
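Concretely, for a 32-byte value such as a transaction hash:

```ts
// The same 32-byte hash, represented two ways.
const asHex = "aa".repeat(32);             // hex string representation
const asBytes = Buffer.from(asHex, "hex"); // actual bytes

console.log(asHex.length);   // 64 -- characters, not bytes
console.log(asBytes.length); // 32 -- matches what the C# side calls size 32
```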
Make a Serializable interface. Then instead of having to guess this sequence:

```ts
args: [
  sc.ContractParam.byteArray(
    u.HexString.fromHex(nef.serialize(), true) // set to little endian
  ),
  sc.ContractParam.byteArray(
    u.HexString.fromAscii(JSON.stringify(manifest.toJson())).reversed() // also make little endian??
  ),
],
```

we could potentially see something much simpler, or maybe sc.ContractParam.ByteArray.fromBuffer() and sc.ContractParam.ByteArray.fromString(), who knows.
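A rough sketch of what the interface and such a simpler call might look like; the interface shape, the Nef stand-in and the byteArray() overload shown in the comment are all hypothetical, not existing neon-js API:

```ts
// Hypothetical interface: anything that can write itself out as plain bytes,
// with any byte-order concerns resolved internally by the implementing type.
interface Serializable {
  serialize(): Uint8Array;
}

// A stand-in NEF type to show the shape; the real NEF/ContractManifest
// classes would implement the same interface.
class Nef implements Serializable {
  constructor(private readonly script: Uint8Array) {}
  serialize(): Uint8Array {
    return this.script; // header, checksum etc. omitted in this sketch
  }
}

// The deployment arguments from the snippet above could then become
// (hypothetical API):
//   args: [
//     sc.ContractParam.byteArray(nef),      // anything Serializable accepted
//     sc.ContractParam.byteArray(manifest),
//   ],
// without a single reverse()/little-endian flag at the call site.
```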
Ending words
It's a big undertaking, but I think it can pay off well in user experience as well as maintenance. I'm interested to hear thoughts.

Follow-up: I had a look and I was mistaken that Node's Buffer automatically advances the pointer. It's also poor in terms of building a script, as it requires concatenating buffers into a new one for every push. I have not yet found a type or package that represents a growable array (something like a vector<uint8_t>) to build on top of. We'll probably have to write some kind of convenience wrapper ourselves, roughly along the lines of the sketch below.
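A minimal sketch of what such a wrapper could look like; the ByteWriter name and API are made up for illustration and only standard Node.js Buffer calls are used:

```ts
// Minimal growable byte writer, comparable to a vector<uint8_t>: appends go
// into a pre-allocated Buffer that doubles when full, and the write position
// is tracked internally instead of by the caller.
class ByteWriter {
  private buf = Buffer.alloc(64);
  private length = 0;

  private ensure(extra: number): void {
    while (this.length + extra > this.buf.length) {
      const grown = Buffer.alloc(this.buf.length * 2);
      this.buf.copy(grown);
      this.buf = grown;
    }
  }

  writeBytes(data: Uint8Array): this {
    this.ensure(data.length);
    this.buf.set(data, this.length);
    this.length += data.length;
    return this;
  }

  writeUInt32LE(value: number): this {
    this.ensure(4);
    this.buf.writeUInt32LE(value, this.length);
    this.length += 4;
    return this;
  }

  // Returns only the bytes actually written, as a copy.
  toBuffer(): Buffer {
    return Buffer.from(this.buf.subarray(0, this.length));
  }
}

// Usage: new ByteWriter().writeUInt32LE(42).writeBytes(Buffer.from("aabb", "hex")).toBuffer()
```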