ift6266: comparison of scripts/deepmlp.py @ 22:cb47cbc95a21
I fixed a bug in the computation of L1 and L2 regularizations
author    Razvan Pascanu <r.pascanu@gmail.com>
date      Fri, 29 Jan 2010 11:01:39 -0500
parents   afdd41db8152
children  (none)
comparing 21:afdd41db8152 with 22:cb47cbc95a21

@@ -100,12 +100,12 @@
         # symbolic form
         self.y_pred = T.argmax( self.p_y_given_x, axis =1)

         # L1 norm ; one regularization option is to enforce L1 norm to
         # be small
-        self.L1=abs(self.W[i]).sum()
-        self.L2_sqr=abs(self.W[i]).sum()
+        self.L1=abs(self.W[0]).sum()
+        self.L2_sqr=abs(self.W[0]).sum()
         for i in range(1,n_layer+1):
             self.L1 += abs(self.W[i]).sum()
         # square of L2 norm ; one regularization option is to enforce
         # square of L2 norm to be small
         for i in range(n_layer+1):
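Note that even after this fix, the committed code still initializes self.L2_sqr with abs(self.W[0]).sum(), which is an L1-style term; a squared L2 norm would sum the squared weights instead. The snippet below is a minimal sketch, not part of the changeset, of how both penalties could be accumulated over every layer's weight matrix in Theano. The helper name regularization_terms and the toy weights are hypothetical; W mirrors the list of per-layer weight matrices (self.W) in the class above.

    import numpy
    import theano

    def regularization_terms(W):
        # W: list of shared weight matrices, one per layer
        # (hypothetical helper, mirroring self.W above).
        L1 = sum(abs(Wi).sum() for Wi in W)        # L1 norm: sum of |w| over all layers
        L2_sqr = sum((Wi ** 2).sum() for Wi in W)  # squared L2 norm: sum of w**2
        return L1, L2_sqr

    # Usage with two toy layers:
    rng = numpy.random.RandomState(0)
    W = [theano.shared(rng.randn(5, 3)), theano.shared(rng.randn(3, 2))]
    L1, L2_sqr = regularization_terms(W)
    penalties = theano.function([], [L1, L2_sqr])
    print(penalties())

Each penalty would then typically be scaled by its own coefficient and added to the training cost.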