comparison doc/v2_planning/dataset.txt @ 1124:0f184b5e7a3f

YB: comment on minibatches for dataset.txt

author    Yoshua Bengio <bengioy@iro.umontreal.ca>
date      Wed, 15 Sep 2010 10:25:35 -0400
parents   27d0ef195e1d
children  7207f86a661f
comparing 1123:1a1c0c3adcca with 1124:0f184b5e7a3f
@@ -262,10 +262,15 @@
 our idea of what 'mini' means) Hopefully the answer to that last question is
 no, as I think it would definitely keep things simpler, since we could simply
 use numpy arrays (for numeric data) or lists (for anything else) to store
 mini-batches' data. So I vote for 'no'.
 
+YB: I agree that a mini-batch should definitely be safely assumed
+to fit in memory. That makes it at least in principle semantically
+different from a dataset. But barring that restriction, it might
+share some of the properties of a dataset.
+
 A dataset is a learner
 ~~~~~~~~~~~~~~~~~~~~~~
 
 OD: (this is hopefully a clearer re-write of the original version from
 r7e6e77d50eeb, which I was not happy with).
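
Editor's note: to make the 'no' vote in the exchange above concrete, here is
a minimal sketch of mini-batch iteration where each batch is a plain numpy
array rather than a dataset-like object. The names (iterate_minibatches,
batch_size) are hypothetical illustrations, not pylearn API.

    # Sketch, not pylearn API: mini-batches are plain in-memory containers
    # (numpy arrays for numeric data, lists otherwise), per OD's proposal.
    import numpy as np

    def iterate_minibatches(data, batch_size):
        """Yield successive mini-batches of `data` as plain numpy arrays.

        Each yielded batch is an ordinary ndarray that is assumed to fit
        in memory (YB's point above), so consumers can index it or do
        arithmetic on it directly, with no dataset-style wrapper.
        """
        for start in range(0, len(data), batch_size):
            yield data[start:start + batch_size]

    # Usage: a toy numeric "dataset" of 10 examples with 3 features each.
    dataset = np.arange(30).reshape(10, 3)
    for batch in iterate_minibatches(dataset, batch_size=4):
        print(batch.shape)  # prints (4, 3), (4, 3), then (2, 3)

Under this assumption a batch carries no dataset semantics at all; whether
that loses anything useful is exactly the property-sharing question YB
raises above.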