@@ -26,15 +26,11 @@ label selector: paddle-job-pserver=deepspeech20170727130129, desired: 1
 running pod list: [('Running', '10.1.3.6')]
 label selector: paddle-job=deepspeech20170727130129, desired: 1
 running pod list: [('Running', '10.1.83.14')]
-Starting training job: /pfs/dlnel/home/yanxu05@baidu.com/jobs/deepspeech20170727130129, num_gradient_servers: 1, trainer_id: 0, version: v2
+Starting training job: /pfs/dlnel/home/****@baidu.com/jobs/deepspeech20170727130129, num_gradient_servers: 1, trainer_id: 0, version: v2
 I0727 05:01:42.969719 25 Util.cpp:166] commandline: --num_gradient_servers=1 --ports_num_for_sparse=1 --use_gpu=1 --trainer_id=0 --pservers=10.1.3.6 --trainer_count=4 --num_passes=1 --ports_num=1 --port=7164
 [INFO 2017-07-27 05:01:50,279 layers.py:2430] output for __conv_0__: c = 32, h = 81, w = 54, size = 139968
 [WARNING 2017-07-27 05:01:50,280 layers.py:2789] brelu is not recommend for batch normalization's activation, maybe the relu is better
 [INFO 2017-07-27 05:01:50,283 layers.py:2430] output for __conv_1__: c = 32, h = 41, w = 54, size = 70848
-[WARNING 2017-07-27 05:01:50,283 layers.py:2789] brelu is not recommend for batch normalization's activation, maybe the relu is better
-[WARNING 2017-07-27 05:01:50,287 layers.py:2789] is not recommend for batch normalization's activation, maybe the relu is better
-[WARNING 2017-07-27 05:01:50,291 layers.py:2789] is not recommend for batch normalization's activation, maybe the relu is better
-[WARNING 2017-07-27 05:01:50,295 layers.py:2789] is not recommend for batch normalization's activation, maybe the relu is better
 I0727 05:01:50.316176 25 MultiGradientMachine.cpp:99] numLogicalDevices=1 numThreads=4 numDevices=4
 I0727 05:01:50.454787 25 GradientMachine.cpp:85] Initing parameters..
 I0727 05:01:50.690007 25 GradientMachine.cpp:92] Init parameters done.