In a secure communication system, it is necessary at some point to verify keys.
Usually, the key is represented as text encoded as letters: hex, base32, base58, or maybe base64. Since a full hash is a lot for a person to deal with, it is often truncated to be easier for humans to process (it's important not to truncate too much, or security is lost: https://evil32.com/).
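For concreteness, here is a minimal sketch (plain Python, standard library only) of what the same truncated fingerprint looks like in a few of these encodings; the key bytes and the 8-byte truncation are arbitrary choices for illustration, and base58 is omitted because it isn't in the standard library:

```python
import hashlib
import base64

# A hypothetical public key; in practice this would be the real key material.
key_bytes = b"example public key material"

# Fingerprint = hash of the key, truncated to 8 bytes purely for illustration.
fingerprint = hashlib.sha256(key_bytes).digest()[:8]

print(fingerprint.hex())                       # hex
print(base64.b32encode(fingerprint).decode())  # base32 (padded)
print(base64.b64encode(fingerprint).decode())  # base64 (padded)
```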
Humans have an approximately fixed capacity to remember things, so when looking at a fingerprint, how many bits are they actually remembering? It seems humans can remember (say) 7 things, but there is no limit on the size of the set those 7 things are picked from: they could just as easily remember 7 animals as 7 digits.
Thus, to expand the security property of the user interface: if key fingerprints are displayed in a denser format (such as base emoji), then humans ought to be able to remember more bits, and thus have more secure fingerprints.
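As a rough back-of-the-envelope check of that argument (assuming, for illustration, a fixed budget of 7 remembered symbols and a hypothetical 1024-glyph emoji alphabet), the bits covered scale with the log of the alphabet size:

```python
import math

SYMBOLS_REMEMBERED = 7  # assumed human budget, per the "7 things" rule of thumb

alphabets = {
    "hex": 16,
    "base32": 32,
    "base58": 58,
    "base64": 64,
    "emoji (hypothetical 1024-glyph set)": 1024,
}

for name, size in alphabets.items():
    bits = SYMBOLS_REMEMBERED * math.log2(size)
    print(f"{name:40s} {bits:5.1f} bits")
```

Under these assumptions, 7 hex digits cover 28 bits while 7 emoji from a 1024-glyph set would cover 70 bits, which is the density gain being hypothesized.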
I want to test this in an experiment. I figure a web page that test subjects visit, which associates keys with user names; then an attacker is simulated, and the subject must flag invalid keys that partially collide with their known keys.
The question is: does detection success vary with the encoding and the collision length?
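A minimal sketch of how the simulated attacker could be generated (the function names, the 8-byte fingerprint, and the prefix length are my own illustrative assumptions, not a proposal for the real study): brute-force a random "key" whose fingerprint shares a chosen-length prefix with the victim's, then show it to the subject and record whether they flag it.

```python
import hashlib
import os

def fingerprint(key: bytes) -> str:
    """Hex fingerprint, truncated to 8 bytes for this sketch."""
    return hashlib.sha256(key).digest()[:8].hex()

def forge_partial_collision(target_fp: str, prefix_chars: int) -> bytes:
    """Brute-force a random key whose fingerprint matches the first
    prefix_chars hex characters of target_fp. Only feasible for short prefixes."""
    prefix = target_fp[:prefix_chars]
    while True:
        candidate = os.urandom(32)
        if fingerprint(candidate).startswith(prefix):
            return candidate

real_key = os.urandom(32)
fake_key = forge_partial_collision(fingerprint(real_key), prefix_chars=4)
print("real:", fingerprint(real_key))
print("fake:", fingerprint(fake_key))  # first 4 hex chars collide
```

Varying prefix_chars (collision length) and the display encoding across trials would give the two independent variables the question above asks about.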
@jbenet would love your ideas on this.
So far I have been unable to find any research on the usability of key fingerprints.