diff doc/v2_planning/dataset.txt @ 1109:29b48deb6a84
reply/comment regarding the GPU and datasets
author:   Razvan Pascanu <r.pascanu@gmail.com>
date:     Tue, 14 Sep 2010 09:01:16 -0400
parents:  546bd0ccb0e4
children: 4797a4cb73e1
--- a/doc/v2_planning/dataset.txt	Mon Sep 13 23:55:04 2010 -0400
+++ b/doc/v2_planning/dataset.txt	Tue Sep 14 09:01:16 2010 -0400
@@ -343,3 +343,17 @@
 shared variable? Why wouldn't the learner just create this shared
 variable internally and copy into it the data provided by the dataset?
 
+RP replies: Sure, the learner could take care of all this. Note, though,
+that the learner would have to divide the dataset into chunks that fit in
+GPU memory (in the case of a large dataset) and then update the shared
+variables according to the current chunk. Personally, I feel that all this
+data division and management should be done by the dataset; it feels more
+natural that way. For example, assume you have a dataset composed of a time
+series plus some static data (the carre-tech heart-beat data is a good
+example). The static data is small enough that you could always store it on
+the GPU, and you would only need to split the time series. For the learner
+to do this (since it gets the same interface from any dataset object) would
+require an "if <this case> then ..." branch, whereas for the dataset it is
+just a different class. But I'm happy to have all this GPU handling handed
+to the learner as well, if everybody else believes that is better.
+
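A minimal sketch of the dataset-side chunking described above, assuming
Theano shared variables; the class name GPUChunkedDataset, its constructor
arguments, and the iter_chunks method are hypothetical illustrations, not
part of any existing pylearn interface:

    import numpy
    import theano

    class GPUChunkedDataset(object):
        """Hypothetical dataset that keeps small static data resident
        on the GPU and streams a large time series through a
        fixed-size shared buffer."""

        def __init__(self, static_data, time_series, chunk_size):
            floatX = theano.config.floatX
            # Static data is small: upload it once and leave it there.
            self.static = theano.shared(
                numpy.asarray(static_data, dtype=floatX), name='static')
            # Host-side copy of the large time series.
            self._series = numpy.asarray(time_series, dtype=floatX)
            self.chunk_size = chunk_size
            # Reusable shared buffer holding the current chunk.
            self.chunk = theano.shared(
                self._series[:chunk_size], name='chunk')

        def iter_chunks(self):
            # Copy each successive slice into the shared buffer; the
            # learner's compiled functions read self.chunk directly.
            n_rows = self._series.shape[0]
            for start in range(0, n_rows, self.chunk_size):
                stop = min(start + self.chunk_size, n_rows)
                self.chunk.set_value(self._series[start:stop],
                                     borrow=True)
                yield start, stop

Under this sketch the learner compiles its Theano functions once against
dataset.static and dataset.chunk and then just loops over
dataset.iter_chunks(); a dataset with no time series, or with a different
splitting policy, is simply a different class, with no "if <this case>"
branch needed in the learner.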