comparison doc/v2_planning/arch_src/plugin_JB_comments_YB.txt @ 1251:70ca63c05672
comment on OD's reply

author:   Razvan Pascanu <r.pascanu@gmail.com>
date:     Thu, 23 Sep 2010 13:44:50 -0400
parents:  ab1db1837e98
children: 4a1339682c8f
OD replies: I can see such a framework being useful for high-level experiment
design (the "big picture", or how to plug different components together). What
I am not convinced about is that we should also use it to write a standard
serial machine learning algorithm (e.g. DBN training with fixed
hyper-parameters).
RP replies: What do you mean by writing down a DBN? I believe the structure
and so on (e.g. selecting the optimizers) shouldn't be done using this
approach. You would start using this syntax to do early stopping and to decide
the order of pre-training the layers. In my view you get something like
pretrain_layer1, pretrain_layer2, finetune_one_step, and then you start using
James' framework. Are you thinking in the same terms?
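
To make the flow RP describes concrete, here is a minimal sketch. The SEQ and
REPEAT_UNTIL combinators and all step functions below are hypothetical
placeholders written in the spirit of James' control-flow framework, not its
actual API; the stopping criterion is likewise invented for illustration.

    # Hedged sketch: hypothetical combinators standing in for James' framework.

    def SEQ(*steps):
        """Run steps in order, threading a shared state dict through them."""
        def run(state):
            for step in steps:
                step(state)
        return run

    def REPEAT_UNTIL(condition, step):
        """Repeat `step` until `condition(state)` is true (early stopping)."""
        def run(state):
            while not condition(state):
                step(state)
        return run

    def pretrain_layer1(state):
        state["layer1_trained"] = True  # placeholder for real pre-training

    def pretrain_layer2(state):
        state["layer2_trained"] = True  # placeholder for real pre-training

    def finetune_one_step(state):
        # placeholder: one fine-tuning update, then refresh validation error
        state["step"] = state.get("step", 0) + 1
        state["valid_error"] = 1.0 / state["step"]

    def should_stop(state):
        # placeholder early-stopping criterion
        return state.get("valid_error", float("inf")) < 0.05

    experiment = SEQ(
        pretrain_layer1,
        pretrain_layer2,
        REPEAT_UNTIL(should_stop, finetune_one_step),
    )

    state = {}
    experiment(state)
    print(state)  # final shared state after the whole experiment ran

The point of the sketch is only the shape of the program: the pre-training
order and the early-stopping loop live in the combinator layer, while the
individual steps stay ordinary serial functions.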