python – How to run distributed training with train_image_classifier.py in the TensorFlow-Slim API

I want to run distributed training using train_image_classifier.py from the TensorFlow-Slim API. I have 2 machines, each with its own GPU, and both run Windows 10.
TensorFlow version 1.12
TensorFlow-Slim API
CUDA 9.0
cuDNN 7.5

I've tried running the script "train_image_classifier.py" on my "PS" machine as follows:

python train_image_classifier.py --train_dir=<my home directory> --dataset_name=<my dataset> --dataset_split_name=train --dataset_dir=<my home directory> --model_name=inception_v3 --checkpoint_path=<...> --checkpoint_exclude_scopes=InceptionV3/Logits --trainable_scopes=InceptionV3/Logits --max_number_of_steps=10000 --batch_size=16 --learning_rate=0.01 --learning_rate_decay_type=fixed --save_interval_secs=60 --save_summaries_secs=60 --log_every_n_steps=<...> --weight_decay=0.00004 --master=grpc://192.168.0.13:3001 --num_clones=1 --worker_replicas=2 --num_ps_tasks=1 --task=0 --sync_replicas=True

and ran the same script on my "worker" machine as follows:

python train_image_classifier.py --train_dir=<my home directory> --dataset_name=<my dataset> --dataset_split_name=train --dataset_dir=<my home directory> --model_name=inception_v3 --checkpoint_exclude_scopes=InceptionV3/Logits --trainable_scopes=InceptionV3/Logits --max_number_of_steps=10000 --batch_size=16 --learning_rate=0.01 --weight_decay=0.00004 --master=grpc://192.168.0.13:3001 --num_clones=1 --worker_replicas=2 --num_ps_tasks=1 --task=0 --sync_replicas=True

but the PS machine displays this:

[screenshot of the console output]

and the worker machine displays the same result.
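My suspicion is that nothing is actually listening at grpc://192.168.0.13:3001: as far as I can tell from the TF 1.x docs, train_image_classifier.py only connects to the session target given by --master and never starts a tf.train.Server itself. So I assume each machine would first need a server process, roughly like this (my own sketch; the script name, the ps port 3000, and the cluster layout are my guesses, not from the slim docs):

    # start_server.py - hypothetical helper, run once per machine
    import sys
    import tensorflow as tf

    # Assumed layout matching --num_ps_tasks=1 --worker_replicas=2;
    # 3001 is the port from my --master flag, 3000 is made up for the ps task.
    cluster = tf.train.ClusterSpec({
        "ps":     ["192.168.0.13:3000"],
        "worker": ["192.168.0.13:3001", "192.168.0.14:3001"],
    })

    job_name = sys.argv[1]         # "ps" or "worker"
    task_index = int(sys.argv[2])  # 0-based index within that job

    server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)
    print("session target:", server.target)  # what --master should point at
    server.join()                  # block and serve graph execution requests

Is that right, or does the script somehow bring up the servers on its own?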

So I tried changing the command to --master=grpc://192.168.0.13:3001 --num_clones=1 --worker_replicas=2 --num_ps_tasks=1 --task=0 --sync_replicas=False, or --task=1, and so on.

It does not work.
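One more thing I noticed while reading about between-graph replication: the --task index apparently has to be unique per worker, because the device placement is derived from it. My understanding, in minimal form (same hypothetical cluster as in the sketch above):

    import tensorflow as tf

    # Each worker process builds its own copy of the graph with its own task
    # index; replica_device_setter then pins variables to the ps job and keeps
    # the other ops on this worker.
    cluster = tf.train.ClusterSpec({
        "ps":     ["192.168.0.13:3000"],
        "worker": ["192.168.0.13:3001", "192.168.0.14:3001"],
    })

    task = 1  # must differ per machine: 0 on the first worker, 1 on the second

    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task, cluster=cluster)):
        global_step = tf.train.get_or_create_global_step()  # placed on /job:ps/task:0

If that is correct, then running both machines with --task=0, as I did above, would make them collide, right?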



Has anybody done this successfully?
Can you explain it to me?
Is the script "train_image_classifier.py" able to do distributed training across 2 machines?

I am very confused.

Please help.