# HG changeset patch
# User Olivier Delalleau
# Date 1285272958 14400
# Node ID 4a1339682c8f86618d793719e31422cbf5f8d8fa
# Parent 70ca63c056721819ce4adb7a6410da03dd470163
Reply to RP

diff -r 70ca63c05672 -r 4a1339682c8f doc/v2_planning/arch_src/plugin_JB_comments_YB.txt
--- a/doc/v2_planning/arch_src/plugin_JB_comments_YB.txt	Thu Sep 23 13:44:50 2010 -0400
+++ b/doc/v2_planning/arch_src/plugin_JB_comments_YB.txt	Thu Sep 23 16:15:58 2010 -0400
@@ -112,3 +112,8 @@
 order of pre-training the layers. In my view you get something like
 pretrain_layer1, pretrain_layer2, finetune_one_step and then starting using
 James framework. Are you thinking in the same terms ?
+
+OD replies: Actually I wasn't thinking of using it at all inside a DBN's code.
+I forgot early stopping for each layer's training though, and it's true it may
+be useful to take advantage of some generic mechanism there... but I wouldn't
+use James' framework for it.
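
Editorial note, outside the patch itself: the reply alludes to a "generic mechanism"
for early stopping during each layer's pre-training. A minimal sketch of what such a
reusable per-layer early-stopping helper could look like follows; all names here
(EarlyStopper, pretrain_layer, train_one_epoch, validation_cost) are hypothetical and
not taken from the planning documents or from James' framework.

    class EarlyStopper(object):
        """Generic early stopping: stop once the validation cost has not
        improved for `patience` consecutive checks."""

        def __init__(self, patience=5):
            self.patience = patience
            self.best_cost = float('inf')
            self.bad_checks = 0

        def should_stop(self, cost):
            if cost < self.best_cost:
                self.best_cost = cost
                self.bad_checks = 0
            else:
                self.bad_checks += 1
            return self.bad_checks >= self.patience

    def pretrain_layer(layer, train_one_epoch, validation_cost, max_epochs=100):
        """Train one layer, reusing the same early-stopping mechanism for
        every layer rather than a full control-flow framework."""
        stopper = EarlyStopper(patience=5)
        for epoch in range(max_epochs):
            train_one_epoch(layer)                      # hypothetical training step
            if stopper.should_stop(validation_cost(layer)):
                break

Under these assumptions, a DBN's code would simply call pretrain_layer on each layer
in turn and then run fine-tuning, keeping control flow in plain Python.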