An optimizer wrapper to make any TensorFlow optimizer capable of training Dynamic Embedding Variables.

```python
tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    self, bp_v2=(False), synchronous=(False)
)
```
Args:

* `self`: a TensorFlow optimizer.
* `bp_v2`: If True, parameters are updated by accumulating deltas (updating) rather than by overwriting them (setting), which resolves the race condition among workers during back-propagation in large-scale distributed asynchronous training. Reference: https://www.usenix.org/system/files/osdi20-jiang.pdf
* `synchronous`: If True, Horovod's all-reduce is used to merge the dense gradients of model parameters; the default reduce method is SUM. Gradients of `TrainableWrapper` parameters are handled the same as before. See the sketch after this list.
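A minimal sketch of enabling both flags when wrapping an optimizer. This assumes a TF2-style Keras optimizer and, for `synchronous=True`, that Horovod is installed and initialized elsewhere in the training script:

```python
import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# bp_v2=True applies embedding updates as deltas instead of overwrites;
# synchronous=True all-reduces the dense gradients through Horovod.
optimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    tf.keras.optimizers.Adam(1e-3), bp_v2=True, synchronous=True)
```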
Example usage:

```python
optimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    tf.train.AdamOptimizer(0.001))
```

Returns:

The optimizer itself, but with the ability to train Dynamic Embedding Variables.
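For context, here is a hedged end-to-end sketch showing the wrapped optimizer training a dynamic embedding table in eager mode. The table name, dimension, ids, and loss are illustrative, and the `get_variable` / `embedding_lookup` calls follow the patterns shown in the TFRA examples:

```python
import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# Illustrative dynamic embedding table (name and dim are arbitrary).
items = tfra.dynamic_embedding.get_variable(
    name="item_embeddings",
    dim=8,
    initializer=tf.keras.initializers.RandomNormal(stddev=0.1))

optimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    tf.keras.optimizers.Adam(1e-3))

ids = tf.constant([3, 7, 42], dtype=tf.int64)

with tf.GradientTape() as tape:
    # return_trainable=True also yields the TrainableWrapper that the
    # wrapped optimizer knows how to update.
    embedded, trainable = tfra.dynamic_embedding.embedding_lookup(
        params=items, ids=ids, return_trainable=True)
    loss = tf.reduce_mean(tf.square(embedded))  # toy loss for illustration

grads = tape.gradient(loss, [trainable])
optimizer.apply_gradients(zip(grads, [trainable]))
```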