pylearn: comparison of gradient_learner.py @ 14:5ede27026e05
"Working on gradient_based_learner"
author:   bengioy@bengiomac.local
date:     Wed, 26 Mar 2008 22:56:13 -0400
parents:  633453635d51
children: 266c68cb6136
--- gradient_learner.py (13:633453635d51)
+++ gradient_learner.py (14:5ede27026e05)
@@ -5,26 +5,29 @@
 from compile import Function
 from gradient_based_optimizer import *
 
 class GradientLearner(Learner):
     """
-    Generic Learner for gradient-based optimization of a training criterion
+    Base class for gradient-based optimization of a training criterion
     that can consist in two parts, an additive part over examples, and
     an example-independent part (usually called the regularizer).
     The user provides a Theano formula that maps the fields of a training example
     and parameters to output fields (for the use function), one of which must be a cost
-    that is the training criterion to be minimized. The user also provides
-    a GradientBasedOptimizer that implements the optimization strategy.
-    The inputs, parameters, outputs and lists of Theano tensors,
+    that is the training criterion to be minimized. Subclasses implement
+    a training strategy that uses the function to compute gradients and
+    to compute outputs in the update method.
+    The inputs, parameters, and outputs are lists of Theano tensors,
     while the example_wise_cost and regularization_term are Theano tensors.
     The user can specify a regularization coefficient that multiplies the regularization term.
     The training algorithm looks for parameters that minimize
     regularization_coefficient * regularization_term(parameters) +
     sum_{inputs in training_set} example_wise_cost(inputs, parameters),
     i.e. the regularization_term should not depend on the inputs, only on the parameters.
     The learned function can map a subset of inputs to a subset of outputs (as long as the inputs subset
     includes all the inputs required in the Theano expression for the selected outputs).
+    It is assumed that all the inputs are provided in the training set, but
+    not necessarily when using the learned function.
     """
     def __init__(self, inputs, parameters, outputs, example_wise_cost, regularization_term,
                  gradient_based_optimizer=StochasticGradientDescent(), regularization_coefficient=astensor(1.0)):
         self.inputs = inputs
         self.outputs = outputs
@@ -33,8 +36,15 @@
         self.regularization_term = regularization_term
         self.gradient_based_optimizer = gradient_based_optimizer
         self.regularization_coefficient = regularization_coefficient
         self.parameters_example_wise_gradient = gradient.grad(example_wise_cost, parameters)
         self.parameters_regularization_gradient = gradient.grad(self.regularization_coefficient * regularization_term, parameters)
+        if example_wise_cost not in outputs:
+            outputs.append(example_wise_cost)
+        if regularization_term not in outputs:
+            outputs.append(regularization_term)
+        self.example_wise_gradient_fn = Function(inputs + parameters,
+                                                 [self.parameters_example_wise_gradient + self.parameters_regularization_gradient])
+        self.use_functions = {frozenset([input.name for input in inputs]): Function(inputs, outputs)}
 
-    # def update(self, training_set):
+    def update(self, training_set):
 
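
The training criterion in the docstring above can be made concrete. The sketch below is a minimal NumPy illustration of that objective, not the pylearn/Theano API: l2_term, squared_error, and training_criterion are hypothetical names standing in for regularization_term, example_wise_cost, and the minimized total.

    import numpy as np

    def l2_term(params):
        # Regularizer: depends only on the parameters, never on the inputs.
        return np.sum(params ** 2)

    def squared_error(x, y, params):
        # Example-wise cost: fields of one training example plus the parameters.
        return (x @ params - y) ** 2

    def training_criterion(training_set, params, regularization_coefficient=1.0):
        # regularization_coefficient * regularization_term(parameters)
        #   + sum_{inputs in training_set} example_wise_cost(inputs, parameters)
        data_cost = sum(squared_error(x, y, params) for x, y in training_set)
        return regularization_coefficient * l2_term(params) + data_cost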
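The constructor precompiles the per-example gradient (data term plus regularization term) so that update() can take parameter steps. The following is a rough sketch of what a stochastic gradient descent pass does with such a gradient function, assuming a plain NumPy setting; grad_fn and learning_rate are illustrative names, not the pylearn API.

    import numpy as np

    def sgd_update(params, training_set, grad_fn, learning_rate=0.01):
        # One stochastic pass: step against the per-example gradient, which is
        # assumed here to already include the regularization gradient.
        for x, y in training_set:
            params = params - learning_rate * grad_fn(x, y, params)
        return params

    # Illustrative use with the squared-error gradient d/dw (x.w - y)^2 = 2*(x.w - y)*x:
    grad_fn = lambda x, y, w: 2.0 * (x @ w - y) * x
    w = sgd_update(np.zeros(2), [(np.array([1.0, 2.0]), 3.0)], grad_fn)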
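The use_functions dictionary at the end of the constructor caches one compiled function per subset of provided inputs, keyed by the frozenset of input names, which is what lets the learned function map a subset of inputs to a subset of outputs. A plain-Python sketch of that dispatch pattern follows; compile_function is a hypothetical stand-in for the Theano Function compilation in the real code.

    def get_use_function(use_functions, inputs, compile_function):
        # Key on the names of the supplied inputs, mirroring
        # frozenset([input.name for input in inputs]) in the constructor.
        key = frozenset(inp.name for inp in inputs)
        if key not in use_functions:
            # Compile and cache a function for this particular input subset.
            use_functions[key] = compile_function(inputs)
        return use_functions[key]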