pylearn (Mercurial): comparison of doc/v2_planning/api_optimization.txt @ 1069:16ea3e5c5a7a
api_optimization: Couple questions
author:   Olivier Delalleau <delallea@iro>
date:     Fri, 10 Sep 2010 10:28:34 -0400
parents:  2bbc464d6ed0
children: 153cf820a975
comparing 1068:9fe0f0755b03 (parent) with 1069:16ea3e5c5a7a
@@ -1,9 +1,9 @@
 Optimization API
 ================
 
-Members: Bergstra, Lamblin, Dellaleau, Glorot, Breuleux, Bordes
+Members: Bergstra, Lamblin, Delalleau, Glorot, Breuleux, Bordes
 Leader: Bergstra
 
 
 Description
 -----------
@@ -92,7 +92,14 @@
 
 :param kwargs: passed through to `opt_algo`
 
 """
 
+OD: Could it be more convenient for x0 to be a list?
 
+OD: Why make a difference between iterative and one-shot versions? A one-shot
+algorithm can be seen as an iterative one that stops after its first
+iteration. The difference I see between the two interfaces proposed here
+is mostly that one relies on Theano while the other one does not, but
+hopefully a non-Theano one can be created by simply wrapping around the
+Theano one.
 
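OD's comment above argues that a one-shot optimizer can be built by simply looping an iterative one until convergence. A minimal sketch of that idea, in plain Python: every name here (`sgd_step`, `minimize`, `tol`) is hypothetical and illustrative, not part of the pylearn API under discussion. It also uses a list for `x0`, as OD's first question suggests.

```python
def sgd_step(x, grad, lr=0.1):
    """One iteration of gradient descent (stands in for an iterative opt_algo)."""
    return [xi - lr * gi for xi, gi in zip(x, grad)]

def minimize(f, grad_f, x0, step=sgd_step, tol=1e-8, max_iter=10000):
    """Hypothetical one-shot interface: loop the iterative step to convergence.

    x0 is a list of parameters, illustrating OD's suggestion that the
    initial point could be a list rather than a single array.
    """
    x = list(x0)
    for _ in range(max_iter):
        g = grad_f(x)
        x_new = step(x, g)
        # Stop when no coordinate moved more than tol in this iteration.
        if max(abs(a - b) for a, b in zip(x, x_new)) < tol:
            return x_new
        x = x_new
    return x

# Usage: minimize f(x, y) = (x - 3)^2 + (y + 1)^2, whose minimum is at (3, -1).
f = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2
grad_f = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
print(minimize(f, grad_f, [0.0, 0.0]))  # approaches [3.0, -1.0]
```

The same wrapping would work whether the inner step is a Theano-compiled update or a plain Python function, which is the sense in which the two proposed interfaces may not need to differ.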