comparison doc/v2_planning/requirements.txt @ 1096:2bbc294fa5ac
requirements: Added a use case
| author | Olivier Delalleau <delallea@iro> |
|---|---|
| date | Mon, 13 Sep 2010 09:38:26 -0400 |
| parents | a65598681620 |
| children | 4eda3f52ebef |
| 1095:520fcaa45692 | 1096:2bbc294fa5ac |
|---|---|
| 40 ----- | 40 ----- |
| 41 | 41 |
| 42 R1. reproduce previous work (our own and others') | 42 R1. reproduce previous work (our own and others') |
| 43 | 43 |
| 44 R2. explore MLA variants by swapping components (e.g. optimization algo, dataset, | 44 R2. explore MLA variants by swapping components (e.g. optimization algo, dataset, |
| 45 hyper-parameters). | 45 hyper-parameters) |
| 46 | 46 |
| 47 R3. analyze experimental results (e.g. plotting training curves, finding best | 47 R3. analyze experimental results (e.g. plotting training curves, finding best |
| 48 models, marginalizing across hyper-parameter choices) | 48 models, marginalizing across hyper-parameter choices) |
| 49 | 49 |
| 50 R4. disseminate (or serve as platform for disseminating) our own published algorithms | 50 R4. disseminate (or serve as platform for disseminating) our own published algorithms |
| 56 random search) | 56 random search) |
| 57 | 57 |
| 58 R7. provide implementations of standard pre-processing algorithms (e.g. PCA, | 58 R7. provide implementations of standard pre-processing algorithms (e.g. PCA, |
| 59 stemming, Mel-scale spectrograms, GIST features, etc.) | 59 stemming, Mel-scale spectrograms, GIST features, etc.) |
| 60 | 60 |
| 61 R8. provide high performance suitable for large-scale experiments, | 61 R8. provide high performance suitable for large-scale experiments |
| 62 | 62 |
| 63 R9. be able to use the most efficient algorithms in special case combinations of | 63 R9. be able to use the most efficient algorithms in special case combinations of |
| 64 learning algorithm components (e.g. when there is a fast k-fold validation | 64 learning algorithm components (e.g. when there is a fast k-fold validation |
| 65 algorithm for a particular model family, the library should not require users | 65 algorithm for a particular model family, the library should not require users |
| 66 to rewrite their standard k-fold validation script to use it) | 66 to rewrite their standard k-fold validation script to use it) |
| 67 | 67 |
| 68 R10. support experiments on a variety of datasets (e.g. movies, images, text, | 68 R10. support experiments on a variety of datasets (e.g. movies, images, text, |
| 69 sound, reinforcement learning?) | 69 sound, reinforcement learning?) |
| 70 | 70 |
| 71 R11. support efficient computations on datasets larger than RAM and GPU memory | 71 R11. support efficient computations on datasets larger than RAM and GPU memory |
| 72 | 72 |
| 73 R12. support infinite datasets (i.e. generated on the fly) | 73 R12. support infinite datasets (i.e. generated on the fly) |
| 74 | 74 |
| 75 | 75 R13. from a given evaluation experimental setup, be able to save a model that |
| | 76 can be used "in production" (e.g. say you try many combinations of |
| | 77 preprocessing, models and associated hyper-parameters, and want to easily be |
| | 78 able to recover the full "processing pipeline" that performs best, to be |
| | 79 used on future "real" test data) |
| 76 | 80 |
| 77 Basic Design Approach | 81 Basic Design Approach |
| 78 ===================== | 82 ===================== |
| 79 | 83 |
| 80 An ability to drive parallel computations is essential in addressing [R6,R8]. | 84 An ability to drive parallel computations is essential in addressing [R6,R8]. |
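R9 above asks that special-case speed-ups be usable without rewriting standard driver scripts. A minimal sketch of one way that could be arranged, assuming a duck-typed hook on the model object; every name here (`fast_k_fold`, `dataset.split`, `model.fit`, `model.score`) is hypothetical and not an existing API of the library:

```python
def k_fold_scores(model, dataset, k=5):
    """Generic k-fold driver that transparently delegates to a specialized
    implementation when the model family provides one (R9)."""
    fast = getattr(model, 'fast_k_fold', None)
    if fast is not None:
        # Special case (e.g. an incremental or closed-form k-fold algorithm
        # for this model family): use it without changing the call site.
        return fast(dataset, k)
    # Generic fallback: retrain from scratch on every fold.
    scores = []
    for fold in range(k):
        train_split, valid_split = dataset.split(fold, k)  # hypothetical interface
        model.fit(train_split)                             # hypothetical interface
        scores.append(model.score(valid_split))            # hypothetical interface
    return scores
```

With this arrangement the user's k-fold script stays the same whether or not a fast path exists; only the model family opts in.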
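R11 and R12 both point toward datasets exposed as iterators over minibatches rather than as in-memory arrays: out-of-core data can then be read in chunks, and "infinite" data can be generated on demand. A minimal Python sketch under that assumption (the function names below are illustrative, not the library's API):

```python
import itertools

import numpy as np


def synthetic_stream(batch_size=32, seed=0):
    """An 'infinite dataset' (R12): yields (inputs, targets) minibatches
    forever, generating them on the fly so nothing must fit in RAM (R11)."""
    rng = np.random.default_rng(seed)
    while True:
        x = rng.uniform(-1.0, 1.0, size=(batch_size, 10))
        y = (x.sum(axis=1) > 0).astype('int8')
        yield x, y


def train(update, stream, n_batches):
    """Consume a bounded number of minibatches from a (possibly infinite) stream."""
    for x, y in itertools.islice(stream, n_batches):
        update(x, y)


if __name__ == '__main__':
    # Dummy update function; a real one would take a gradient step, etc.
    train(lambda x, y: None, synthetic_stream(), n_batches=100)
```

A disk-backed dataset larger than RAM could expose the same iterator interface by yielding memory-mapped or chunked reads instead of generated arrays.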
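R13 is essentially about serializing the best-performing preprocessing + model combination as a single object that can later be applied to real test data. A sketch of what that could look like, assuming a hypothetical `Pipeline` wrapper with `transform`/`predict` interfaces (not an existing class in the library):

```python
import pickle


class Pipeline(object):
    """Bundles a fitted preprocessor with a trained predictor so the full
    processing chain can be shipped as one object (R13)."""

    def __init__(self, preprocessor, predictor):
        self.preprocessor = preprocessor
        self.predictor = predictor

    def predict(self, raw_inputs):
        # Apply the exact preprocessing used during model selection.
        return self.predictor.predict(self.preprocessor.transform(raw_inputs))


def save_best(pipelines, validation_error, path):
    """Persist the candidate pipeline with the lowest validation error."""
    best = min(pipelines, key=validation_error)
    with open(path, 'wb') as f:
        pickle.dump(best, f)
    return best

# In production the winning pipeline is reloaded and applied to new data:
#     with open(path, 'rb') as f:
#         pipeline = pickle.load(f)
#     predictions = pipeline.predict(new_raw_data)
```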