Mercurial > pylearn
changeset 1251:70ca63c05672
comment on OD's reply
author:   Razvan Pascanu <r.pascanu@gmail.com>
date:     Thu, 23 Sep 2010 13:44:50 -0400
parents:  ab1db1837e98
children: 4a1339682c8f
files:    doc/v2_planning/arch_src/plugin_JB_comments_YB.txt
diffstat: 1 files changed, 7 insertions(+), 0 deletions(-)
--- a/doc/v2_planning/arch_src/plugin_JB_comments_YB.txt	Thu Sep 23 13:36:04 2010 -0400
+++ b/doc/v2_planning/arch_src/plugin_JB_comments_YB.txt	Thu Sep 23 13:44:50 2010 -0400
@@ -105,3 +105,10 @@
 I am not convinced about is that we should also use it to write a standard
 serial machine learning algorithm (e.g. DBN training with fixed
 hyper-parameters).
+
+RP replies: What do you mean by "writing down a DBN"? I believe the
+structure and so on (selecting the optimizers) shouldn't be done using this
+approach. You would use this syntax to do early stopping and to decide the
+order of pre-training the layers. In my view you get something like
+pretrain_layer1, pretrain_layer2, finetune_one_step, and only then start
+using James's framework. Are you thinking in the same terms?
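A minimal sketch of the sequencing RP describes: the control-flow syntax only orders the high-level stages (pretrain each layer in turn, then finetune under early stopping), while model construction and optimizer selection happen elsewhere. All function names, the dummy loss, and the patience value are illustrative assumptions, not part of any actual pylearn or framework API:

```python
# Hypothetical stage functions; in RP's description these would be the
# coarse-grained units the control-flow framework sequences.
def pretrain_layer(layer_id, n_epochs):
    # Stand-in for unsupervised pre-training of one DBN layer.
    return f"pretrained layer {layer_id} for {n_epochs} epochs"

def finetune_one_step(step):
    # Stand-in for one supervised fine-tuning update.
    return f"finetune step {step}"

def run_sequence():
    log = []
    # Stage 1: greedy layer-wise pre-training, expressed as an ordered
    # sequence (pretrain_layer1, pretrain_layer2, ...).
    for layer in (1, 2):
        log.append(pretrain_layer(layer, n_epochs=10))
    # Stage 2: fine-tuning wrapped in a simple early-stopping loop, the
    # kind of control decision RP suggests the syntax should handle.
    best, patience = float("inf"), 3
    for step in range(100):
        log.append(finetune_one_step(step))
        loss = 1.0 / (step + 1)  # dummy, monotonically decreasing loss
        if loss < best:
            best = loss
        else:
            patience -= 1
            if patience == 0:
                break
    return log
```

The point of the sketch is the granularity: the framework sees only whole stages and the stopping decision, not the per-layer model structure.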