Committee: AB, PL, GM, IG, RP, NB, PV
Leader: ?

Discussion of Function Specification for Learner Types
=======================================================

In its most abstract form, a learner is an object with the following
semantics:

* A learner has named hyper-parameters that control how it learns (these
  can be viewed as options of the constructor, or might be set directly by
  a user).

* A learner also has an internal state that depends on what it has
  learned.

* A learner reads and produces data, so the definition of learner is
  intimately linked to the definition of dataset (and task).

* A learner has one or more 'train' or 'adapt' functions by which it is
  given a sample of data (typically either the whole training set, or a
  mini-batch, which contains as a special case a single 'example').
  Learners interface with datasets in order to obtain data.  These
  functions cause the learner to change its internal state and to exploit,
  to some extent, the data provided.  The 'train' function should take
  charge of completely exploiting the dataset, as specified by the
  hyper-parameters, so that it would typically be called only once.  An
  'adapt' function is meant for learners that can operate in an 'online'
  setting, where data continually arrive and the control loop (when to
  stop) is managed outside the learner.  For most intents and purposes,
  the 'train' function could also handle the 'online' case by providing
  the controlled iterations over the dataset (which would then be seen as
  a stream of examples).

    * learner.train(dataset)
    * learner.adapt(data)

* Different types of learners can then exploit their internal state in
  order to perform various computations after training is completed, or in
  the middle of training, e.g.,

  * y = learner.predict(x)
    for learners that see (x,y) pairs during training and predict y given
    x, or for learners that see only x's and learn a transformation of
    them (i.e. feature extraction).  Here and below, x and y are
    tensor-like objects whose first index iterates over particular
    examples in a batch or minibatch of examples.

  * p = learner.probability(examples)
    p = learner.log_probability(examples)
    for learners that can estimate probability density or probability
    functions.  Note that an example could be a pair (x,y) for learners
    that expect each example to represent such a pair.  The second form is
    provided in case the examples are high-dimensional and computations in
    the log-domain are numerically preferable.  The first dimension of
    examples, or of x and y, is an index over a minibatch or a dataset.

  * p = learner.free_energy(x)
    for learners that can estimate a log unnormalized probability; the
    output has the same length as the input.

  * c = learner.costs(examples)
    returns a matrix of costs (one row per example, i.e., again the output
    has the same length as the input), the first column of which
    represents the cost whose expectation we wish to minimize over new
    samples from the unknown underlying data distribution.

  Some learners may be able to handle x's and y's that contain missing
  values.

* For convenience, some of these operations could be bundled, e.g.

    * [prediction, costs] = learner.predict_and_adapt((x,y))

* Some learners could include in their internal state not only what they
  have learned but also some information about recently seen examples that
  conditions the expected distribution of upcoming examples.  In that
  case, they might be used in an online setting as follows:

    for (x,y) in data_stream:
        [prediction, costs] = learner.predict_and_adapt((x,y))
        accumulate_statistics(prediction, costs)

* In some cases, each example is itself a (possibly variable-size)
  sequence or other variable-size object (e.g. an image, or a video).
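As a concrete illustration of the operations listed above, here is a
minimal sketch of what such an interface could look like in Python.  The
method names and semantics follow the list above; everything else (the
abstract base class, the NotImplementedError defaults) is an assumption
for illustration, not a settled design::

    class Learner(object):
        """Hyper-parameters are constructor options; learned parameters
        live in the internal state."""

        def train(self, dataset):
            """Completely exploit `dataset`, as specified by the
            hyper-parameters.  Typically called only once."""
            raise NotImplementedError

        def adapt(self, data):
            """Update the internal state from one (mini)batch of data;
            the control loop (when to stop) lives outside the learner."""
            raise NotImplementedError

        def predict(self, x):
            """Return y given x; the first index of x iterates over the
            examples in a (mini)batch."""
            raise NotImplementedError

        def log_probability(self, examples):
            """Log-density estimate, one value per example."""
            raise NotImplementedError

        def free_energy(self, x):
            """Log unnormalized probability, one value per row of x."""
            raise NotImplementedError

        def costs(self, examples):
            """Matrix of costs, one row per example; column 0 holds the
            cost whose expectation we want to minimize on new data."""
            raise NotImplementedError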
James's idea for Learner Interface
==================================

Theory:
-------

Think about the unfolding of a learning algorithm as exploring a path in a
vast directed graph.  There are some source nodes, which are potential
initial conditions for the learning algorithm.

At any node, there are a number of outgoing labeled edges that represent
distinct directions of exploration: like "allocate a model with N hidden
units", or "set the l1 weight decay on such-and-such units to 0.1", or
"adapt for T iterations", or "refresh the GPU dataset memory with the next
batch of data".  Not all nodes have the same outgoing edge labels.  The
dataset, model, and optimization algorithm implementations may each have
their various hyper-parameters, with various restrictions on what values
they can take and when they can be changed.

Every move in this graph incurs some storage and computational expense,
and extends the explored region of the graph.  Learners typically engage
in goal-directed exploration of this graph - for example, to find the node
with the best validation-set performance given a certain computational
budget.  We might often be interested in the best node found.  The
predict(), log_probability(), free_energy(), etc. calls correspond to
costs that we can measure at any particular node (at some computational
expense) to see how we are doing in our exploration.

Many semantically distinct components come into the definition of this
graph: the model (e.g. DAA), the dataset (e.g. an online one), and the
inference and learning strategy.  I'm not sure what to call this graph
other than an 'experiment graph'... so I'll go with that for now.

Use Cases
---------

Early stopping
~~~~~~~~~~~~~~

Early stopping can be implemented as a learner that progresses along a
particular kind of edge (e.g. "train more") until a stopping criterion (in
terms of a cost computed from nodes along the path) is met.
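To make this use case concrete, here is a rough sketch of early stopping
as goal-directed exploration of the experiment graph.  It leans on the
Learner/Instruction API specified under "Implementation Details / API"
below; the 'train more' label, the `label` attribute on instructions, and
the use of costs() on a held-out validation set are assumptions for
illustration, not part of the proposal::

    def early_stopping(learner, valid_set, patience=5):
        """Follow 'train more' edges until the validation cost (column 0
        of learner.costs(), averaged over valid_set) stops improving for
        `patience` consecutive moves; return the best node found."""
        def valid_cost(node):
            return float(node.costs(valid_set)[:, 0].mean())

        best = learner.deepcopy()           # learners are serializable
        best_cost, strikes = valid_cost(learner), 0
        while strikes < patience:
            edges = [i for i in learner.active_instructions()
                     if getattr(i, 'label', None) == 'train more']
            if not edges:
                break                       # no 'train more' edge at this node
            edges[0].execute(learner, (), {})   # move along the edge in-place
            cost = valid_cost(learner)
            if cost < best_cost:
                best, best_cost, strikes = learner.deepcopy(), cost, 0
            else:
                strikes += 1
        return best, best_cost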
Grid Search
~~~~~~~~~~~

Grid search is a learner policy that can be implemented in an experiment
graph where all paths have the form:

    ("set param 0 to X", "set param 1 to Y", ..., "set param N to Z",
     adapt, [early stop...], test)

It would explore all paths of this form and then return the best node.

Stagewise learning of DBNs combined with early stopping and grid search
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This would be a learner that is effective for experiment graphs that
reflect the greedy-stagewise optimization of DBNs.

Boosting
~~~~~~~~

Given an ExperimentGraph that permits re-weighting of examples, it is
straightforward to write a meta-ExperimentGraph around it that implements
AdaBoost.  A meta-meta-ExperimentGraph around that, adding early stopping,
would complete the picture and make a useful boosting implementation.

Using External Hyper-Parameter Optimization Software
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TODO: use-case - show how we could use the optimizer from
http://www.cs.ubc.ca/labs/beta/Projects/ParamILS/

Implementation Details / API
----------------------------

Learner
~~~~~~~

An object that allows us to explore the graph discussed above.
Specifically, it represents an explored node in that graph.

    def active_instructions()
        """ Return a list/set of Instruction instances (see below) that
        the Learner is prepared to handle.
        """

    def copy(), deepcopy()
        """ Learners should be serializable. """

To make the implementation easier, I found it helpful to introduce a
string-valued `fsa_state` member attribute and to associate methods with
these states.  That made it syntactically easy to build relatively complex
finite-state transition graphs describing which instructions are active at
which times in the life-cycle of a learner.

Instruction
~~~~~~~~~~~

An object that represents a potential edge in the graph discussed above.
It is an operation that a learner can perform.

    arg_types
        """ A list of Type objects (see below) indicating what args are
        required by execute. """

    def execute(learner, args, kwargs):
        """ Perform some operation on the learner (follow an edge in the
        graph discussed above) and modify the learner in-place.  Calling
        execute 'moves' the learner from one node in the graph along an
        edge.  To keep the old learner as well, it must be copied prior
        to calling execute().
        """

    def expense(learner, args, kwargs, resource_type='CPUtime'):
        """ Return an estimated cost of performing this instruction
        (calling execute), in time, space, number of computers, disk
        requirement, etc.
        """

Type
~~~~

An object that describes a parameter domain for a call to
Instruction.execute.  It is not necessary that a Type specify exactly
which arguments are legal, but it should `include` all legal arguments,
and exclude as many illegal ones as possible.

    def includes(value):
        """ Return True if value is a legal argument. """

To make things a bit more practical, there are some Type subclasses like
Int, Float, Str, ImageDataset, SgdOptimizer, that include additional
attributes (e.g. min, max, default) so that automatic graph exploration
algorithms can generate legal arguments with reasonable efficiency.

The proxy pattern is a powerful way to combine learners, especially when
proxy Learner instances also introduce proxy Instruction classes.  For
example, it is straightforward to implement a hyper-learner by
implementing a Learner with another learner (the sub-learner) as a member
attribute.  The hyper-learner makes some modifications to the
active_instructions() return value of the sub-learner, typically to
introduce more powerful instructions and hide simpler ones.

It is less straightforward, but consistent with the design, to implement a
Learner that encompasses job management.  Such a learner would retain the
semantics of the active_instructions() of the sub-learner, but would
replace the Instruction objects themselves with Instructions that arrange
for remote procedure calls (e.g. jobman, multiprocessing, bqtools, etc.).
Such a learner would replace synchronous instructions (return on
completion) with asynchronous ones (return after scheduling), and the
active instruction set would also change asynchronously, but neither of
these things is inconsistent with the Learner API.
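As a sketch of this proxy pattern, a hyper-learner might look roughly like
the following.  Only active_instructions() and deepcopy() come from the
API above; the constructor signature, the label-based filtering, and the
`label` attribute on instructions are illustrative assumptions::

    import copy

    class HyperLearner(object):
        """Proxy learner wrapping a sub-learner."""

        def __init__(self, sub_learner, extra_instructions=(),
                     hidden_labels=()):
            self.sub_learner = sub_learner
            # More powerful edges introduced at the hyper level (e.g. a
            # whole grid search packaged as a single instruction).
            self.extra_instructions = list(extra_instructions)
            # Simpler sub-learner edges that the hyper-learner hides.
            self.hidden_labels = set(hidden_labels)

        def active_instructions(self):
            base = [i for i in self.sub_learner.active_instructions()
                    if getattr(i, 'label', None) not in self.hidden_labels]
            return base + self.extra_instructions

        def deepcopy(self):
            return copy.deepcopy(self)

A job-management proxy would follow the same shape, but keep the
sub-learner's instruction labels intact and wrap each Instruction in one
whose execute() schedules a remote call and returns immediately.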
TODO
~~~~

I feel like something is missing from the API - an interface to the graph
structure discussed above.  The nodes in this graph are natural places to
store meta-information for visualization, statistics-gathering, etc., but
none of the APIs above corresponds to the graph itself.  In other words,
there is no API through which to attach information to nodes.

It is not good to say that the Learner instance *is* the node, because (a)
learner instances change during graph exploration and (b) learner
instances are big, and we don't want to have to keep a whole saved model
just to attach meta-information, e.g. a validation score.  Choosing this
API spills over into other committees, so we should get their feedback
about how to resolve it.