tfra.dynamic_embedding.DynamicEmbeddingOptimizer





An optimizer wrapper that makes any TensorFlow optimizer capable of training Dynamic Embedding Variables.

tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    bp_v2=False,
    synchronous=False
)
Args:

  • self: a TensorFlow optimizer to be wrapped.
  • bp_v2: If True, parameters are updated by applying deltas rather than by overwriting values, which avoids the race condition among workers during back-propagation in large-scale distributed asynchronous training (see the sketch after this list). Reference: https://www.usenix.org/system/files/osdi20-jiang.pdf
  • synchronous: If True, Horovod's all-reduce is used to merge the gradients of dense model parameters; the default reduce operation is SUM. Gradients of TrainableWrapper parameters are handled the same way as before.
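
A minimal sketch of how these flags might be passed when wrapping a Keras optimizer. The choice of tf.keras.optimizers.Adam and the flag values shown are illustrative assumptions, not part of this API's documentation:

```python
import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# Wrap a standard Keras optimizer so it can also train Dynamic Embedding Variables.
optimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    tf.keras.optimizers.Adam(1e-3),
    bp_v2=True,         # apply deltas instead of overwriting parameter values
    synchronous=False,  # set True only when merging dense gradients with Horovod
)
```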

Example usage:

optimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    tf.train.AdamOptimizer(0.001))
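
For context, below is a minimal end-to-end sketch of the wrapped optimizer training a dynamic embedding table. It assumes TF1-style graph execution and the tfra.dynamic_embedding.get_variable / embedding_lookup APIs; the variable name, dimension, ids, and toy loss are all illustrative assumptions:

```python
import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# A dynamic embedding table; rows are created on demand as new ids appear.
embeddings = tfra.dynamic_embedding.get_variable(
    name="user_embeddings",  # illustrative name
    dim=8,
    initializer=tf.compat.v1.random_normal_initializer(0.0, 0.1))

ids = tf.constant([1, 3, 2, 5], dtype=tf.int64)

# return_trainable=True also returns the TrainableWrapper that the wrapped
# optimizer updates during back-propagation.
emb, trainable = tfra.dynamic_embedding.embedding_lookup(
    params=embeddings, ids=ids, name="lookup", return_trainable=True)

loss = tf.reduce_mean(tf.square(emb))  # toy loss for illustration only

optimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    tf.compat.v1.train.AdamOptimizer(0.001))
train_op = optimizer.minimize(loss, var_list=[trainable])
```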

Returns:

The optimizer itself, but with the ability to train Dynamic Embedding Variables.