pylearn: comparison of dataset.py @ 146:8173e196e291
Trying to make CacheDataSet work
author:   Yoshua Bengio <bengioy@iro.umontreal.ca>
date:     Mon, 12 May 2008 15:50:28 -0400
parents:  ceae4de18981
children: 39bb21348fdf
comparison of dataset.py between 144:ceae4de18981 and 146:8173e196e291:

@@ -1048,11 +1048,12 @@
         if cache_all_upon_construction:
             # this potentially brings all the source examples
             # into memory at once, which may be too much
             # the work could possibly be done by minibatches
             # that are as large as possible but no more than what memory allows.
-            self.cached_examples = zip(*source_dataset.minibatches(minibatch_size=len(source_dataset)).__iter__().next())
+            fields_values = source_dataset.minibatches(minibatch_size=len(source_dataset)).__iter__().next()
+            self.cached_examples = zip(*fields_values)
         else:
             self.cached_examples = []

         self.fieldNames = source_dataset.fieldNames
         self.hasFields = source_dataset.hasFields
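The pattern in this hunk is: request a single minibatch as large as the whole dataset (one column of values per field), then transpose those field columns into per-example tuples with zip. Below is a minimal sketch of that idea, not the pylearn implementation; the ToyDataSet class, its fields, and the printed values are hypothetical, and the sketch is written for Python 3 (the original uses Python 2's .__iter__().next()).

# Sketch only: a toy stand-in for a pylearn-style dataset whose minibatches()
# yields one column (list of values) per field.
class ToyDataSet(object):
    def __init__(self, fields):
        # fields: dict mapping field name -> list of per-example values
        self.fields = fields

    def __len__(self):
        return len(next(iter(self.fields.values())))

    def minibatches(self, minibatch_size):
        # Yield tuples with one column per field, each column sliced to the
        # requested minibatch size.
        n = len(self)
        for start in range(0, n, minibatch_size):
            yield tuple(values[start:start + minibatch_size]
                        for values in self.fields.values())

source_dataset = ToyDataSet({'x': [[1, 2], [3, 4], [5, 6]],
                             'y': [0, 1, 0]})

# Same pattern as the patch: one minibatch spanning the whole dataset,
# then zip(*...) turns per-field columns into per-example tuples.
fields_values = next(iter(source_dataset.minibatches(minibatch_size=len(source_dataset))))
cached_examples = list(zip(*fields_values))
print(cached_examples)  # [([1, 2], 0), ([3, 4], 1), ([5, 6], 0)]

Splitting the original one-liner into fields_values plus a separate zip call does not change behavior; it only makes the intermediate per-field columns easier to inspect while debugging the caching logic.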