sanitize_token() #144

Closed
wibeasley opened this issue Mar 21, 2017 · 1 comment

Comments

wibeasley (Member) commented Mar 21, 2017

(@nutterb recommended an explicit token sanitizing function in #133.)

@nutterb, I like your idea for a token validation function. I'm fine with pulling out @haozhu233's code into an explicit function, and/or replacing the gsub() with substr(). Or maybe a compromise between regex & substrings, like:

sanitize_token <- function( token ) {
  # Validate only 32-character uppercase hexadecimals, with an optional trailing newline.
  # perl = TRUE is needed for the (?:...) non-capturing group.
  pattern <- "^([0-9A-F]{32})(?:\\n)?$"

  if( !grepl(pattern, token, perl = TRUE) )
    stop("The token is not a valid 32-character hexadecimal value.")

  # Return only the captured token, stripping any trailing newline.
  sub(pattern, "\\1", token, perl = TRUE)
}

Once we have the explicit function, we could accommodate variations in future REDCap versions, e.g., sprintf("^([0-9A-F]{%i})(?:\\n)?$", token_length) if the token ever expands beyond 32 characters. A rough sketch follows.
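For illustration, a minimal sketch of that parameterized version; the token_length argument (and its default of the current 32 characters) is an assumption, not settled API:

sanitize_token <- function( token, token_length = 32L ) {
  # Hypothetical generalization: build the pattern for the expected token length.
  pattern <- sprintf("^([0-9A-F]{%i})(?:\\n)?$", token_length)

  if( !grepl(pattern, token, perl = TRUE) )
    stop("The token is not a valid ", token_length, "-character hexadecimal value.")

  # Return only the captured token, stripping any trailing newline.
  sub(pattern, "\\1", token, perl = TRUE)
}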

nutterb (Contributor) commented Mar 21, 2017

This seems reasonable, and also more proactive. If a token somehow contains something non-alphanumeric, this will catch it before it is sent to the server and probably give the user a more satisfying error message.
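For example, with the sanitize_token() sketch above (the token value shown here is hypothetical), a malformed token fails fast with a local error message instead of an opaque server response:

sanitize_token("9A81268476645C4E5F03428B8AC3AA7B\n")
#> [1] "9A81268476645C4E5F03428B8AC3AA7B"

sanitize_token("not-a-real-token")
#> Error: The token is not a valid 32-character hexadecimal value.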

wibeasley added a commit that referenced this issue Mar 24, 2017
wibeasley added a commit that referenced this issue Mar 24, 2017
wibeasley added a commit that referenced this issue Mar 25, 2017