diff doc/v2_planning/datalearn.txt @ 1365:049b99f4b323
reply to OD
author    Razvan Pascanu <r.pascanu@gmail.com>
date      Fri, 12 Nov 2010 11:49:00 -0500
parents   01157763c2d7
children  f945ed016c68
--- a/doc/v2_planning/datalearn.txt	Fri Nov 12 11:36:30 2010 -0500
+++ b/doc/v2_planning/datalearn.txt	Fri Nov 12 11:49:00 2010 -0500
@@ -227,6 +227,32 @@
     for sample in graph(my_dataset):
         ...
+RP answers: right. I was actually constructing this kind of stupid example in
+my mind, where you would write:
+
+    i1 = f1(data)
+    i2 = f2(i1)
+    i3 = f3(i2)
+    ...
+    iN = fN(iN-1)
+
+and then say: wait, I want to do this on new_data as well. Oh no, now I have
+to copy the entire block. That is annoying. But actually you could just write:
+
+    def my_f(data):
+        i1 = f1(data)
+        ...
+        return iN
+
+and then reuse that function, which is what you pointed out. I am no longer
+sure about the point I was trying to make. If you are a lazy programmer and
+write everything without functions, you can argue that you prefer (2) because
+you only pass the dataset at the end rather than at the beginning. But if (1)
+had the replace function, this argument would fail. And it only holds if you
+do not want to make a function out of your pipeline that takes the dataset as
+input, which, now that I think about it, would be pretty silly not to do.
+Sorry for that.
+
+
 - in approach (1) the initial dataset object (the one that loads the data)
   decides if you will use shared variables and indices to deal with the dataset
   or if you will use ``theano.tensor.matrix`` and not the user( at
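A minimal standalone sketch of the point made in the added text above (outside
the diff, for illustration only): once the chain of transformations lives
inside an ordinary Python function, applying it to new_data is one extra call
instead of a copy-paste of the whole block. The transforms f1/f2/f3, the
pipeline name, and the numpy arrays are placeholders invented here, not part
of the actual proposal; eager numpy computation stands in for whatever the
dataset objects would really do.

    import numpy

    # Placeholder transforms standing in for f1 .. fN above; in the real
    # design these would produce dataset / expression objects rather than
    # operate on plain arrays.
    def f1(x):
        return x * 2.0

    def f2(x):
        return x + 1.0

    def f3(x):
        return x.mean(axis=0)

    def my_pipeline(data):
        # The whole chain i1 = f1(data); i2 = f2(i1); ... is captured once,
        # so it can be re-applied to any dataset without duplicating the block.
        i1 = f1(data)
        i2 = f2(i1)
        i3 = f3(i2)
        return i3

    data = numpy.ones((4, 3))
    new_data = numpy.zeros((4, 3))

    print(my_pipeline(data))      # original dataset
    print(my_pipeline(new_data))  # new dataset: one extra call, nothing copied

The same reasoning applies whether my_pipeline computes eagerly (as in this
sketch) or only builds a symbolic graph that is compiled and applied later.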