ift6266: changeset 22:cb47cbc95a21
I fixed a bug in the computation of L1 and L2 regularizations
| field | value |
|---|---|
| author | Razvan Pascanu <r.pascanu@gmail.com> |
| date | Fri, 29 Jan 2010 11:01:39 -0500 |
| parents | afdd41db8152 |
| children | 442789c94b27 |
| files | scripts/deepmlp.py |
| diffstat | 1 files changed, 2 insertions(+), 2 deletions(-) |
```diff
--- a/scripts/deepmlp.py	Thu Jan 28 23:03:44 2010 -0600
+++ b/scripts/deepmlp.py	Fri Jan 29 11:01:39 2010 -0500
@@ -102,8 +102,8 @@
         # L1 norm ; one regularization option is to enforce L1 norm to
         # be small
-        self.L1=abs(self.W[i]).sum()
-        self.L2_sqr=abs(self.W[i]).sum()
+        self.L1=abs(self.W[0]).sum()
+        self.L2_sqr=abs(self.W[0]).sum()
         for i in range(1,n_layer+1):
             self.L1 += abs(self.W[i]).sum()
             # square of L2 norm ; one regularization option is to enforce
```
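For context, below is a minimal NumPy sketch of the accumulation pattern this change establishes: seed both penalties with the first layer's weights `W[0]`, then loop over the remaining layers `1..n_layer`. This is not the repository's Theano code; the weight matrices `W` and `n_layer` here are hypothetical stand-ins. Note that the sketch uses the conventional `(W**2).sum()` for the squared L2 norm, whereas the changeset itself still seeds `self.L2_sqr` with `abs(...).sum()`.

```python
# Minimal NumPy sketch (not the repository's Theano code) of accumulating
# L1 and squared-L2 penalties over the weight matrices of a deep MLP.
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical stand-ins for the MLP's per-layer weight matrices W[0..n_layer].
n_layer = 2
W = [rng.randn(5, 4), rng.randn(4, 3), rng.randn(3, 2)]

# Seed both penalties with the first weight matrix ...
L1 = np.abs(W[0]).sum()       # L1 norm: sum of absolute values
L2_sqr = (W[0] ** 2).sum()    # squared L2 norm: sum of squared values

# ... then accumulate the remaining layers, as in the loop shown in the diff.
for i in range(1, n_layer + 1):
    L1 += np.abs(W[i]).sum()
    L2_sqr += (W[i] ** 2).sum()

print("L1 penalty:", L1)
print("squared L2 penalty:", L2_sqr)
```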