Currently, the latency of the completer-lstm grows approximately linearly with the length of the prefix string, because the full prefix is re-fed through the network on every request. This could be avoided by caching the last LSTM state tuple list on the client side and sending it back to the server when an extended completion is requested, so that only the newly typed characters need to be processed.
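The caching idea can be sketched as follows. This is a minimal illustration, not the project's actual API: `rnn_step` is a toy deterministic stand-in for one real LSTM cell step, and `CompletionServer` is a hypothetical wrapper that resumes from the longest cached prefix instead of replaying the whole string.

```python
import hashlib

def rnn_step(state, char):
    # Toy stand-in for one LSTM step: deterministically mixes the
    # character into the current "state" (bytes here, not a real
    # LSTM state tuple).
    return hashlib.sha256(state + char.encode("utf-8")).digest()

class CompletionServer:
    """Hypothetical server that caches RNN state per seen prefix."""

    def __init__(self, initial_state=b"\x00"):
        self.initial_state = initial_state
        self.cache = {}      # prefix -> state after consuming that prefix
        self.steps_run = 0   # instrumentation: number of rnn_step calls

    def state_for(self, prefix):
        # Find the longest cached prefix of `prefix`, then run only the
        # remaining suffix through the RNN.
        best, state = "", self.initial_state
        for cached_prefix, cached_state in self.cache.items():
            if prefix.startswith(cached_prefix) and len(cached_prefix) > len(best):
                best, state = cached_prefix, cached_state
        for ch in prefix[len(best):]:
            state = rnn_step(state, ch)
            self.steps_run += 1
        self.cache[prefix] = state
        return state
```

With this scheme, extending a completion from `"def ma"` to `"def main"` costs two RNN steps instead of eight, while producing the same state as a cold run over the full prefix. In the issue's proposal the cache would live on the client and the state tuple would travel with the request, but the cost accounting is the same.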
Furthermore, a significant share of wall time is spent on JSON serialization, transmission, and deserialization of the M*N*(char, probability) prediction matrix, where N is the number of characters to predict and M is the total number of lexical features. To reduce the matrix to its significant entries, each position should only include the top (5?) most probable next characters rather than all M.
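A sketch of that pruning step, under assumed names (`prune_predictions` and the dummy 95-character vocabulary are illustrations, not the project's code): each of the N rows of the prediction matrix is cut down to its k most probable (char, probability) pairs before JSON encoding.

```python
import heapq
import json
import random

def top_k(row, k=5):
    # Keep only the k most probable (char, probability) pairs of one row.
    return heapq.nlargest(k, row, key=lambda cp: cp[1])

def prune_predictions(matrix, k=5):
    # matrix: N positions, each a list of M (char, probability) pairs.
    # Returns N rows of at most k pairs each.
    return [top_k(row, k) for row in matrix]

# Dummy prediction matrix: 10 positions over a 95-character vocabulary,
# standing in for the real M*N softmax output.
random.seed(0)
vocab = [chr(c) for c in range(32, 127)]
matrix = [[(ch, random.random()) for ch in vocab] for _ in range(10)]

full_payload = json.dumps(matrix)
pruned_payload = json.dumps(prune_predictions(matrix, k=5))
```

For this dummy matrix the pruned payload is a small fraction of the full one, since 5 of 95 entries survive per row; the client can still rank candidate completions, it just loses the long tail of near-zero probabilities.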