changeset 1252:4a1339682c8f

Reply to RP
author Olivier Delalleau <delallea@iro>
date Thu, 23 Sep 2010 16:15:58 -0400
parents 70ca63c05672
children 826d78f0135f
files doc/v2_planning/arch_src/plugin_JB_comments_YB.txt
diffstat 1 files changed, 5 insertions(+), 0 deletions(-)
--- a/doc/v2_planning/arch_src/plugin_JB_comments_YB.txt	Thu Sep 23 13:44:50 2010 -0400
+++ b/doc/v2_planning/arch_src/plugin_JB_comments_YB.txt	Thu Sep 23 16:15:58 2010 -0400
@@ -112,3 +112,8 @@
 order of pre-training the layers. In my view you get something like
 pretrain_layer1, pretrain_layer2, finetune_one_step, and then start using
 James' framework. Are you thinking in the same terms?
+
+OD replies: Actually I wasn't thinking of using it at all inside a DBN's code.
+I did forget about early stopping for each layer's training, though, and it's
+true that it may be useful to take advantage of some generic mechanism
+there... but I wouldn't use James' framework for it.
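
For illustration only, here is a minimal sketch of the kind of sequence the
discussion refers to (pretrain_layer1, pretrain_layer2, finetune_one_step)
combined with a generic per-layer early-stopping helper. None of the names
below (EarlyStopper, pretrain_layer, finetune_one_step) come from the
original code or from James' framework; they are hypothetical stand-ins.

    # Hypothetical sketch: layer-wise pretraining with a generic
    # early-stopping helper, independent of any particular plugin framework.
    # All names are made up for illustration.

    class EarlyStopper(object):
        """Stop when validation cost has not improved for `patience` checks."""
        def __init__(self, patience=3):
            self.patience = patience
            self.best_cost = float('inf')
            self.bad_checks = 0

        def should_stop(self, valid_cost):
            if valid_cost < self.best_cost:
                self.best_cost = valid_cost
                self.bad_checks = 0
            else:
                self.bad_checks += 1
            return self.bad_checks >= self.patience


    def pretrain_layer(layer_idx, max_epochs=100):
        """Dummy unsupervised training loop for one layer."""
        stopper = EarlyStopper(patience=3)
        for epoch in range(max_epochs):
            # ... one epoch of unsupervised training for this layer ...
            valid_cost = max(0.1, 1.0 / (epoch + 1))  # placeholder cost that plateaus
            if stopper.should_stop(valid_cost):
                break
        return stopper.best_cost


    def finetune_one_step():
        """Placeholder for a single supervised fine-tuning step."""
        pass


    if __name__ == '__main__':
        for layer_idx in range(2):      # pretrain_layer1, pretrain_layer2
            pretrain_layer(layer_idx)
        finetune_one_step()             # then hand control to the outer loop

The point of the sketch is only that the early-stopping logic can live in a
small reusable object passed to each layer's training loop, without the DBN
code itself depending on the larger framework.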