Distributed Deep Learning Training: Model and Data Parallelism in TensorFlow

How to train your models across multiple GPUs or machines using distributed strategies such as mirrored strategy, parameter server, and central storage.
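As a taste of what follows, here is a minimal sketch of data parallelism with `tf.distribute.MirroredStrategy`, one of the strategies discussed in this article. The model architecture and input shape are illustrative placeholders, not taken from the article.

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU
# (falling back to CPU if none are found) and averages
# gradients across replicas after each training step.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored across devices.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Calling model.fit(...) then automatically shards each batch
# across the replicas.
```

On a single-GPU or CPU-only machine this runs with one replica; with several GPUs attached, each batch is split across them transparently.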