comparison deep/stacked_dae/stacked_dae.py @ 208:acb942530923

Completely rewrote my series module, now based on HDF5 and PyTables (in a separate directory, 'tables_series', for backward compatibility with running code). Minor (inconsequential) changes to stacked_dae.
author fsavard
date Fri, 05 Mar 2010 18:07:20 -0500
parents e1f5f66dd7dd
children 7b4507295eba
comparison
205:10a801240bfc → 208:acb942530923
@@ -138,10 +138,15 @@
 # note : we sum over the size of a datapoint; if we are using minibatches,
 #        L will be a vector, with one entry per example in minibatch
 #self.L = - T.sum( self.x*T.log(self.z) + (1-self.x)*T.log(1-self.z), axis=1 )
 #self.L = binary_cross_entropy(target=self.x, output=self.z, sum_axis=1)
 
+# bypassing z to avoid running into log(0)
+#self.z_a = T.dot(self.y, self.W_prime) + self.b_prime
+#self.L = -T.sum( self.x * (-T.log(1+T.exp(-self.z_a))) \
+#         + (1.0-self.x) * (-self.z_a - T.log(1+T.exp(-self.z_a))), axis=1 )
+
 # I added this epsilon to avoid getting log(0) and 1/0 in grad
 # This means conceptually that there'd be no probability of 0, but that
 # doesn't seem that important to me (maybe I'm wrong?).
 eps = 0.00000001
 eps_1 = 1-eps
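
A minimal sketch of the "bypassing z" idea from the inserted lines, assuming Theano as used throughout this repo: the cross-entropy is written directly in terms of the pre-sigmoid activation, using log(sigmoid(a)) = -softplus(-a) and log(1 - sigmoid(a)) = -a - softplus(-a), so log(0) is never formed. The names x and z_a follow the diff; everything else is illustrative:

import theano
import theano.tensor as T

x   = T.dmatrix('x')    # reconstruction targets in [0, 1]
z_a = T.dmatrix('z_a')  # pre-sigmoid activation, i.e. T.dot(y, W_prime) + b_prime

# log(sigmoid(z_a)) and log(1 - sigmoid(z_a)), both via softplus, so no
# probability is ever materialized (hence no log(0), and no 1/0 in grad)
log_z      = -T.nnet.softplus(-z_a)
log_one_mz = -z_a - T.nnet.softplus(-z_a)

L = - T.sum(x * log_z + (1.0 - x) * log_one_mz, axis=1)
f = theano.function([x, z_a], L)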
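
The epsilon trick at the end of the hunk can be completed the same way. The hunk stops right after eps and eps_1 are defined, so how they are applied is an assumption here; one plausible use is squashing the reconstruction z into [eps, 1-eps] before taking logs:

import theano
import theano.tensor as T

x   = T.dmatrix('x')
z_a = T.dmatrix('z_a')

eps = 0.00000001
eps_1 = 1 - eps

# squash the sigmoid output from [0, 1] into [eps, 1-eps]; log and its
# gradient then stay finite, at the cost of never allowing probabilities
# of exactly 0 or 1 (assumed use of eps/eps_1, as the hunk ends above)
z = eps + (eps_1 - eps) * T.nnet.sigmoid(z_a)
L = - T.sum(x * T.log(z) + (1.0 - x) * T.log(1.0 - z), axis=1)
f = theano.function([x, z_a], L)

Of the two, the softplus form is the safer, since it never materializes z at all; the epsilon form keeps the original expression readable at the cost of biasing the probabilities slightly toward the interior of (0, 1).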