# HG changeset patch
# User Frederic Bastien
# Date 1286295962 14400
# Node ID e5b7a7913329ce4f4b739f282b79ada9cb855de7
# Parent d5e536338b690b57eea1a40617a842d1639c1b5d
fix rst error.

diff -r d5e536338b69 -r e5b7a7913329 doc/v2_planning/arch_FB.txt
--- a/doc/v2_planning/arch_FB.txt	Tue Oct 05 09:57:35 2010 -0400
+++ b/doc/v2_planning/arch_FB.txt	Tue Oct 05 12:26:02 2010 -0400
@@ -14,10 +14,10 @@
 
 * Select the hyper parameter search space with `jobman sqlschedules`
 * Dispatch the jobs with dbidispatch
-* *Manually*(fixable) reset jobs status to START.
+* *Manually* (fixable) reset jobs status to START.
   * I started it, but I will change the syntax to make it more generic.
 * *Manually* relaunch crashed jobs.
-* *Manually*(fixable) analyse/visualise the result. (We need to start those meeting at some point)
+* *Manually* (fixable) analyse/visualise the result. (We need to start those meeting at some point)
 
 Example MLP+cross validataion
 -----------------------------
@@ -39,7 +39,7 @@
 * *Jobman Extension* We can extend jobman to handle dependency between jobs.
   * Proposed syntax:
 
-.. code-block::
+.. code-block:: bash
 
     jobman sqlschedule p0={{}} ... -- p1={{}} ... -- p2=...
 
@@ -58,7 +58,7 @@
 
 * *Jobman Policy* All change to the db should be doable by jobman command.
 
 * *Manually* relaunch crashed jobs.
-* *Manually*(fixable) analyse/visualise the result.
+* *Manually* (fixable) analyse/visualise the result.
   * Those tools need to understand the concept of job phase or be agnostic of that.
diff -r d5e536338b69 -r e5b7a7913329 doc/v2_planning/committees.txt
--- a/doc/v2_planning/committees.txt	Tue Oct 05 09:57:35 2010 -0400
+++ b/doc/v2_planning/committees.txt	Tue Oct 05 12:26:02 2010 -0400
@@ -1,4 +1,4 @@
-List of committees and their members (leader marked with a *):
+List of committees and their members (leader marked with a \*):
 
 * Existing Python ML libraries investigation: GD, DWF, IG, DE
 * Dataset interface: DE*, OB, OD, AB, PV
diff -r d5e536338b69 -r e5b7a7913329 doc/v2_planning/existing_python_ml_libraries.txt
--- a/doc/v2_planning/existing_python_ml_libraries.txt	Tue Oct 05 09:57:35 2010 -0400
+++ b/doc/v2_planning/existing_python_ml_libraries.txt	Tue Oct 05 12:26:02 2010 -0400
@@ -101,26 +101,27 @@
 libraries we should definitely be interested in, such as libsvm (because it
 is well-established) and others that get state of the art performance or are
 good for extremely large datasets, etc.
 
-milk:
-    k-means
-    svm's with arbitrary python types for kernel arguments
-pybrain:
-    lstm
-mlpy:
-    feature selection
-mdp:
-    ica
-    LLE
-scikit.learn:
-    lasso
-    nearest neighbor
-    isomap
-    various metrics
-    mean shift
-    cross validation
-    LDA
-    HMMs
-Yet Another Python Graph Library:
-    graph similarity functions that could be useful if we want to
-learn with graphs as data
+* milk:
+  * k-means
+  * svm's with arbitrary python types for kernel arguments
+* pybrain:
+  * lstm
+* mlpy:
+  * feature selection
+* mdp:
+  * ica
+  * LLE
+* scikit.learn:
+  * lasso
+  * nearest neighbor
+  * isomap
+  * various metrics
+  * mean shift
+  * cross validation
+  * LDA
+  * HMMs
+* Yet Another Python Graph Library:
+  * graph similarity functions that could be useful if we want to
+    learn with graphs as data
+
diff -r d5e536338b69 -r e5b7a7913329 doc/v2_planning/requirements.txt
--- a/doc/v2_planning/requirements.txt	Tue Oct 05 09:57:35 2010 -0400
+++ b/doc/v2_planning/requirements.txt	Tue Oct 05 12:26:02 2010 -0400
@@ -134,10 +134,10 @@
 R15.
 If you see the library as a driver that controls several components ( and we
 argue that any approach can be seen like this), the driver should always :
-    - be serializable
-    - respond to internal interrupts ("checkpoints")
-    - respond to external interrupts ( timeout)
-    - async interrupts ( eg. SIGTERM)
+  - be serializable
+  - respond to internal interrupts ("checkpoints")
+  - respond to external interrupts ( timeout)
+  - async interrupts ( eg. SIGTERM)
 
 R16 Cognitive load should be minimal (debatable requirement)
 Notes : Is hard to actually appreciate cognitive load, so this should be
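The dependency proposal patched into arch_FB.txt above is easiest to picture against the MLP + cross-validation example that same file discusses. The sketch below only illustrates the proposal, not an existing jobman feature: the database string (``$DB``), the ``mlp.*`` experiment entry points, the phase names, and the hyper-parameter values are invented for the example, and the exact argument forms of ``jobman sqlschedule``, ``jobman sql``, and ``dbidispatch`` should be checked against the jobman documentation.

.. code-block:: bash

    # Hypothetical values throughout: $DB stands for the jobman database/table
    # string; the mlp.* entry points and hyper-parameters are invented.

    # Today: each phase is scheduled on its own and dispatched with
    # dbidispatch; dependencies between phases are tracked by hand.
    jobman sqlschedule "$DB" mlp.train 'lr={{0.1,0.01}}' 'n_hidden={{100,500}}'
    dbidispatch jobman sql "$DB" exp_dir

    # Proposed syntax from arch_FB.txt: one call describes the whole pipeline,
    # with "--" separating the phases, so jobman could record that p1 depends
    # on p0 and p2 depends on p1.
    jobman sqlschedule "$DB" \
        p0=mlp.prepare_data -- \
        p1=mlp.train 'lr={{0.1,0.01}}' 'n_hidden={{100,500}}' -- \
        p2=mlp.cross_validate n_folds=5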