diff doc/v2_planning/dataset.txt

changeset: 1124:0f184b5e7a3f
summary:   YB: comment on minibatches for dataset.txt
author:    Yoshua Bengio <bengioy@iro.umontreal.ca>
date:      Wed, 15 Sep 2010 10:25:35 -0400
parents:   27d0ef195e1d
children:  7207f86a661f
--- a/doc/v2_planning/dataset.txt	Wed Sep 15 09:42:11 2010 -0400
+++ b/doc/v2_planning/dataset.txt	Wed Sep 15 10:25:35 2010 -0400
@@ -264,6 +264,11 @@
 use numpy arrays (for numeric data) or lists (for anything else) to store
 mini-batches' data. So I vote for 'no'.
 
+YB: I agree that a mini-batch should definitely be safely assumed
+to fit in memory. That makes it at least in principle semantically
+different from a dataset. But barring that restriction, it might
+share some of the properties of a dataset.
+
 A dataset is a learner
 ~~~~~~~~~~~~~~~~~~~~~~
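The distinction drawn in the added comment — a dataset may be arbitrarily large, while every mini-batch it yields is safe to materialize in memory — can be sketched as follows. This is an illustrative sketch only, not pylearn's actual interface; the class and method names (`ArrayDataset`, `minibatches`) are hypothetical.

```python
import numpy as np

class ArrayDataset:
    """Hypothetical dataset wrapper: the underlying data may be large
    and is only ever iterated, while each mini-batch yielded is a
    concrete in-memory numpy array (per the comment in the diff)."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def minibatches(self, batch_size):
        # Yield successive slices; each slice is small enough to be
        # safely assumed to fit in memory, unlike the dataset itself.
        for start in range(0, len(self.data), batch_size):
            yield self.data[start:start + batch_size]

dataset = ArrayDataset(np.arange(10).reshape(5, 2))
batches = list(dataset.minibatches(batch_size=2))
print(len(batches))       # 3 batches of 2, 2, and 1 rows
print(batches[0].shape)   # (2, 2)
```

Because a mini-batch is a plain numpy array, it supports array operations directly, whereas the dataset is only promised to be iterable — which is the "semantically different, but sharing some properties" point the comment makes.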