pylearn: annotate cost.py @ 524:317a052f9b14

changeset: better main, allow to debug in a debugger.
author:    Frederic Bastien <bastienf@iro.umontreal.ca>
date:      Fri, 14 Nov 2008 16:46:03 -0500
parents:   f13847478c6d
"""
Cost functions.

@note: All of these functions return one cost per example. So it is your
job to perform a tensor.sum over the individual example losses.

@todo: Make a Cost class, with a particular contract.

@todo: It would be nice to implement a hinge loss, with a particular margin.
"""

import theano.tensor as T
from xlogx import xlogx

def quadratic(target, output, axis=1):
    return T.mean(T.sqr(target - output), axis=axis)

def cross_entropy(target, output, axis=1):
    """
    @todo: This is essentially duplicated as nnet_ops.binary_crossentropy
    @warning: OUTPUT and TARGET are reversed in nnet_ops.binary_crossentropy
    """
    return -T.mean(target * T.log(output) + (1 - target) * T.log(1 - output), axis=axis)
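
# Editorial note, not in the original changeset: per the docstring warning,
# nnet_ops takes its arguments in the opposite order,
# binary_crossentropy(output, target), so the relation to the function
# above should be
#     cross_entropy(target, output, axis) ==
#         T.mean(nnet_ops.binary_crossentropy(output, target), axis=axis)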

def KL_divergence(target, output):
    """
    @note: We do not compute the mean, because if target and output have
    different shapes then the result will be garbled.
    """
    return -(target * T.log(output) + (1 - target) * T.log(1 - output)) \
        + (xlogx(target) + xlogx(1 - target))
    # return cross_entropy(target, output, axis) - cross_entropy(target, target, axis)
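
A minimal usage sketch, not part of the repository: it assumes cost.py is
importable as "cost" and uses only standard Theano calls (theano.function,
T.dmatrix); the minibatch data and shapes are made up. It illustrates the
contract from the module docstring (each cost returns one value per example,
and the caller sums them into the scalar to optimize), and evaluates the mean
of KL_divergence over examples, which is the quantity in the commented-out
last line, cross_entropy(target, output) - cross_entropy(target, target).

    import numpy
    import theano
    import theano.tensor as T
    from cost import cross_entropy, KL_divergence

    # Hypothetical minibatch: one row per example.
    target = T.dmatrix('target')
    output = T.dmatrix('output')

    per_example = cross_entropy(target, output)  # one cost per example
    total = T.sum(per_example)                   # scalar, as the docstring asks

    f = theano.function([target, output], [per_example, total])
    t = numpy.array([[1., 0.], [0., 1.]])
    o = numpy.array([[0.9, 0.2], [0.3, 0.8]])
    print(f(t, o))

    # Per-example mean of the elementwise KL divergence.
    kl = theano.function([target, output],
                         T.mean(KL_divergence(target, output), axis=1))
    print(kl(t, o))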