Author | Commit | Message | Age
------ | ------ | ------- | ---
Hui Zhang | e4ecfb22fd | format code | 3 years ago
Hui Zhang | 3fa2e44e89 | more interface, trigger, extension | 3 years ago
Hui Zhang | dfd80b3aa2 | recog into decoders, format code | 3 years ago
Hui Zhang | a4e27da64b | decoder with ctc prefix score | 3 years ago
Hui Zhang | 37790b76d0 | argparse load conf from file | 3 years ago
Hui Zhang | 9092594fab | resume does not increase epoch and step; before_train does it | 3 years ago
Hui Zhang | 8b45c3e65e | refactor trainer.py and remove useless dir setup code | 3 years ago
Hui Zhang | b291c69386 | add checkpoint to save parameters | 3 years ago
Hui Zhang | f4f2d6f07e | print deps module version | 3 years ago
huangyuxin | 9c37d10992 | optimize the log | 3 years ago
huangyuxin | 4a1d3ad9b3 | fix the bug of parallel using condition | 3 years ago
Hui Zhang | c6e8a33b73 | fix set_device; more utils; args.opts supports multiple same names | 3 years ago
Hui Zhang | 913b2300c3 | nprocs 0 for cpu, other for gpu | 3 years ago
Hui Zhang | 98b15eda05 | batch WaveDataset | 3 years ago
Hui Zhang | bab29b94f1 | fix train log | 3 years ago
Hui Zhang | 56e55c2171 | do not save ckpt on exception, since resume train will increase epoch and step | 3 years ago
Hui Zhang | 431106b986 | fix bugs | 3 years ago
Hui Zhang | 676d160dfa | more resume ckpt info | 3 years ago
Hui Zhang | daaa72a606 | resume train with epoch and iteration increase | 3 years ago
Hui Zhang | 7775abd727 | fix prof switch | 3 years ago
Hui Zhang | 9fb349f935 | fix benchmark cli | 3 years ago
Hui Zhang | 054e099b28 | format | 3 years ago
Hui Zhang | 0e91d26ae3 | fix log; add report to trainer | 3 years ago
Hui Zhang | 6de20de3f8 | rename reporter.scope to ObsScope | 3 years ago
Hui Zhang | 576e94da04 | log interval 1 when benchmark | 3 years ago
Hui Zhang | cda6ca8323 | add benchmark flags, and logic | 3 years ago
Hui Zhang | a997b5a61c | rename ckpt suffix to np | 3 years ago
Hui Zhang | 3a5258f6a0 | lr and opt param will restore from ckpt, so we do not set lr manually | 3 years ago
Hui Zhang | 7907319288 | fix profiler | 3 years ago
Hui Zhang | 5fdda953b9 | add op profiling | 3 years ago
Hui Zhang | 3843372958 | u2 with chainer updater | 3 years ago
Hui Zhang | f0470e0529 | do not dump all grad info, since it slows down the train process | 3 years ago
Hui Zhang | 65e666378d | add timer info | 3 years ago
Hui Zhang | c29ee83a46 | add timer | 3 years ago
Hui Zhang | 673cc4a081 | seed all with log; and format | 3 years ago
Hui Zhang | 14ac780658 | fix trainer when dataloader not using batch_sampler | 3 years ago
Hui Zhang | cfdca210ff | chainer style updater | 3 years ago
huangyuxin | b3d27e4bbb | merge the develop | 3 years ago
huangyuxin | 2d3b2aed05 | add seed in argparse | 3 years ago
Hui Zhang | 561d5cf085 | refactor feature, dict and argument for new config format | 3 years ago
Hui Zhang | ab23eb5710 | fix for kaldi | 3 years ago
huangyuxin | e1a2cfef7f | fix the resume bug: the lr is not related to iteration, but epoch | 3 years ago
Hui Zhang | c4da9a7f3a | filter key by class signature, do not print tensor | 3 years ago
Hui Zhang | 3912c255ef | support noam lr and opt | 3 years ago
Hui Zhang | 1cd4d4bf83 | fix tiny conf and refactor optimizer and scheduler | 3 years ago
Haoxin Ma | 08b6213bc8 | fix private function | 4 years ago
Haoxin Ma | 6d92417edd | optimize the function | 4 years ago
Haoxin Ma | 16210c0587 | fix bug | 4 years ago
Haoxin Ma | 91e70a2857 | multi gpus | 4 years ago
Haoxin Ma | 8af2eb073a | revise config | 4 years ago