changeset 1124:0f184b5e7a3f
YB: comment on minibatches for dataset.txt
author    Yoshua Bengio <bengioy@iro.umontreal.ca>
date      Wed, 15 Sep 2010 10:25:35 -0400
parents   1a1c0c3adcca
children  5387666d49b4
files     doc/v2_planning/dataset.txt
diffstat  1 files changed, 5 insertions(+), 0 deletions(-)
--- a/doc/v2_planning/dataset.txt	Wed Sep 15 09:42:11 2010 -0400
+++ b/doc/v2_planning/dataset.txt	Wed Sep 15 10:25:35 2010 -0400
@@ -264,6 +264,11 @@
 use numpy arrays (for numeric data) or lists (for anything else) to store
 mini-batches' data. So I vote for 'no'.
 
+YB: I agree that a mini-batch can safely be assumed to fit in
+memory. That makes it at least in principle semantically different
+from a dataset. But barring that restriction, it might share some
+of the properties of a dataset.
+
 A dataset is a learner
 ~~~~~~~~~~~~~~~~~~~~~~
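
To make the point concrete, here is a minimal sketch of what such an
object could look like. The class name MiniBatch and its interface are
assumptions for illustration, not pylearn's actual API: an in-memory
container whose fields are numpy arrays (numeric data) or lists
(anything else), but which still exposes dataset-like properties such
as length, per-example indexing, and iteration.

import numpy as np

class MiniBatch(object):
    # Hypothetical sketch: an in-memory slice of a dataset. Because
    # every field is a materialized numpy array or list, the whole
    # batch is guaranteed to fit in memory, yet the object keeps the
    # dataset-like interface (len, indexing, iteration over examples).

    def __init__(self, **fields):
        lengths = set(len(v) for v in fields.values())
        assert len(lengths) == 1, "all fields must have the same number of examples"
        self.fields = fields
        self.n_examples = lengths.pop()

    def __len__(self):
        return self.n_examples

    def __getitem__(self, i):
        # One example, as a dict of field values, as a dataset might return.
        return dict((name, values[i]) for name, values in self.fields.items())

    def __iter__(self):
        for i in range(self.n_examples):
            yield self[i]

# Usage: a numeric field as a numpy array, a non-numeric field as a list.
batch = MiniBatch(x=np.zeros((32, 784)), label=['unknown'] * 32)
print(len(batch))           # 32
print(batch[0]['x'].shape)  # (784,)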