============
Requirements
============

Application Requirements
========================

Terminology and Abbreviations:
------------------------------

MLA - machine learning algorithm

learning problem - a machine learning application, typically characterized by a
dataset (possibly dataset folds), one or more functions to be learned from the
data, and one or more metrics to evaluate those functions.  Learning problems
are the benchmarks for empirical model comparison.

n. of - number of

SGD - stochastic gradient descent

Users:
------

- New masters and PhD students in the lab should be able to quickly move into
  'production' mode without having to reinvent the wheel.

- Students in the two ML classes should be able to play with the library to
  explore new ML variants.  This means some APIs (e.g. the Experiment level)
  must be really well documented and conceptually simple.

- Researchers outside the lab (who might study and experiment with our
  algorithms).

- Partners outside the lab (e.g. Bell, Ubisoft) with closed-source commercial
  projects.

Uses:
-----

R1. reproduce previous work (our own and others')

R2. explore MLA variants by swapping components (e.g. optimization algorithm,
    dataset, hyper-parameters)

R3. analyze experimental results (e.g. plotting training curves, finding best
    models, marginalizing across hyper-parameter choices)

R4. disseminate (or serve as a platform for disseminating) our own published
    algorithms

R5. provide implementations of common MLA components (e.g. classifiers,
    datasets, optimization algorithms, meta-learning algorithms)

R6. drive large-scale parallelizable computations (e.g. grid search, bagging,
    random search)

R7. provide implementations of standard pre-processing algorithms (e.g. PCA,
    stemming, Mel-scale spectrograms, GIST features, etc.)

R8. provide high performance suitable for large-scale experiments

R9. be able to use the most efficient algorithms in special-case combinations
    of learning algorithm components (e.g. when there is a fast k-fold
    validation algorithm for a particular model family, the library should not
    require users to rewrite their standard k-fold validation script to use
    it)

R10. support experiments on a variety of datasets (e.g. movies, images, text,
     sound, reinforcement learning?)

R11. support efficient computations on datasets larger than RAM and GPU memory

R12. support infinite datasets (i.e. generated on the fly)

R13. apply trained models "in production"

     - e.g. say you try many combinations of preprocessing, models and
       associated hyper-parameters, and want to easily be able to recover the
       full "processing pipeline" that performs best, and use it on real/test
       data later (a minimal sketch of this use case follows the OD comments
       below).

OD comments: Note that R9 and R13 may conflict with each other.  Some
optimizations performed for R9 may modify the input "symbolic graph" in such a
way that extracting the components required for production use (R13) becomes
more difficult, or even impossible.  Imagine for instance that the graph is
modified to take advantage of the fact that k-fold validation can be performed
efficiently internally by some specific algorithm.  Then it may no longer be
obvious how to remove the k-fold split from the saved model you want to use in
production.
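The following is a minimal sketch of the R13 use case in plain Python.  The
Pipeline class, its methods, and the pickled file name are hypothetical
placeholders, not an existing pylearn API; the point is only that the best
preprocessing + model combination should be recoverable as a single object and
reusable on new data later::

    import pickle

    class Pipeline(object):
        """Hypothetical chain of preprocessing steps ending in a model."""

        def __init__(self, steps):
            # steps[:-1] must offer fit()/transform(); steps[-1] fit()/predict()
            self.steps = steps

        def fit(self, data):
            for step in self.steps[:-1]:
                step.fit(data)
                data = step.transform(data)
            self.steps[-1].fit(data)
            return self

        def predict(self, data):
            for step in self.steps[:-1]:
                data = step.transform(data)
            return self.steps[-1].predict(data)

    # After hyper-parameter search, save the single best pipeline...
    #   best = search_over_configurations(train_data)   # hypothetical helper
    #   pickle.dump(best, open('best_pipeline.pkl', 'wb'))
    # ...and later, "in production", reload it and apply it to fresh data:
    #   pipeline = pickle.load(open('best_pipeline.pkl', 'rb'))
    #   predictions = pipeline.predict(new_data)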
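To make the R9/R13 tension above concrete, here is a toy illustration; both
classes are hypothetical.  With the generic k-fold loop, any one of the fitted
per-fold models can be shipped to production; with the model-specific fast
path, the k-fold structure is baked into the fitted state, and there is no
single-model component left to extract::

    class NaiveKFold(object):
        """Generic k-fold loop: k independent models, each easy to reuse."""

        def __init__(self, model_factory, k):
            self.models = [model_factory() for _ in range(k)]

        def fit(self, folds):
            # folds: list of k (train, valid) pairs
            for model, (train, valid) in zip(self.models, folds):
                model.fit(train)
            # any self.models[i] can be saved and used in production (R13)

    class FusedKFold(object):
        """Fast path (R9): shares sufficient statistics across folds."""

        def fit(self, folds):
            # e.g. accumulate one global set of statistics and derive each
            # fold's result by a leave-fold-out correction; no per-fold model
            # object exists in the fitted state, so "removing the k-fold
            # split" for production is no longer straightforward.
            pass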