Announcing TensorFlow Fold: Deep Learning With Dynamic Computation Graphs
By Moshe Looks, Marcello Herreshoff, and DeLesley Hutchins, Software Engineers at Google
February 09, 2017
Today we are releasing TensorFlow Fold to address these challenges. TensorFlow Fold makes it easy to implement deep-learning models that operate over data of varying size and structure. Furthermore, TensorFlow Fold brings the benefits of batching to such models, resulting in a speedup of more than 10x on CPU, and more than 100x on GPU, over alternative implementations. This is made possible by dynamic batching, introduced in our paper Deep Learning with Dynamic Computation Graphs.
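As a rough illustration of the idea (a minimal sketch in plain Python with made-up helper names, not the TensorFlow Fold API), dynamic batching can be viewed as scheduling the nodes of many trees level by level, then running each level's identical operations as a single batched call instead of one call per node:

```python
# Hypothetical sketch of dynamic batching (plain Python, not the Fold API).
# Inputs are trees: ('leaf', value) or ('node', left, right).

def evaluate(node, leaf_fn, merge_fn):
    """Naive evaluation: one op call per tree node, so the computation
    graph's shape mirrors the input's shape."""
    if node[0] == 'leaf':
        return leaf_fn(node[1])
    _, left, right = node
    return merge_fn(evaluate(left, leaf_fn, merge_fn),
                    evaluate(right, leaf_fn, merge_fn))

def batched_evaluate(trees, batch_leaf_fn, batch_merge_fn):
    """Dynamic batching sketch: group nodes by depth across ALL trees,
    then evaluate each level with one batched call."""
    levels = {}   # depth -> nodes at that depth, across every input tree
    def collect(node):
        d = 0 if node[0] == 'leaf' else 1 + max(collect(node[1]), collect(node[2]))
        levels.setdefault(d, []).append(node)
        return d
    for t in trees:
        collect(t)

    results = {}  # id(node) -> computed value
    for d in sorted(levels):
        nodes = levels[d]
        if d == 0:  # level 0 holds only leaves
            vals = batch_leaf_fn([n[1] for n in nodes])
        else:       # deeper levels hold only merge nodes; children are done
            vals = batch_merge_fn([results[id(n[1])] for n in nodes],
                                  [results[id(n[2])] for n in nodes])
        for n, v in zip(nodes, vals):
            results[id(n)] = v
    return [results[id(t)] for t in trees]

# Two inputs with different sizes and structures.
t1 = ('node', ('leaf', 1), ('leaf', 2))
t2 = ('node', ('node', ('leaf', 3), ('leaf', 4)), ('leaf', 5))

# Naive: 5 leaf calls + 3 merge calls across the two inputs.
print(evaluate(t1, lambda x: x, lambda a, b: a + b))   # 3
print(evaluate(t2, lambda x: x, lambda a, b: a + b))   # 12

# Batched: 1 leaf call + 2 merge calls for BOTH inputs together.
batch_leaf = lambda xs: xs
batch_merge = lambda ls, rs: [a + b for a, b in zip(ls, rs)]
print(batched_evaluate([t1, t2], batch_leaf, batch_merge))  # [3, 12]
```

In a real implementation the batched calls would be single TensorFlow ops over stacked tensors, with extra gather/concat-style instructions inserted to move intermediate results between levels, which is what yields the batching speedups both within and across inputs.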
Because the individual inputs may have different sizes and structures, the computation graphs may differ as well. Dynamic batching then automatically combines these graphs to take advantage of opportunities for batching, both within and across inputs, and inserts additional instructions to move data between the batched operations (see our paper for technical details). To learn more, head over to our GitHub site. We hope that TensorFlow Fold will be useful for researchers and practitioners implementing neural networks with dynamic computation graphs in TensorFlow.

Acknowledgements
This work was done under the supervision of Peter Norvig.