changeset 1156:f2105a06201c

optimization: Proposal to bring the Theano and Numpy interfaces closer together
author Olivier Delalleau <delallea@iro>
date Fri, 17 Sep 2010 11:21:15 -0400
parents b70a1fcb7b4f
children aea510b71386 531e77fb67f2
files doc/v2_planning/optimization.txt
diffstat 1 files changed, 44 insertions(+), 11 deletions(-)
--- a/doc/v2_planning/optimization.txt	Fri Sep 17 16:09:02 2010 +0300
+++ b/doc/v2_planning/optimization.txt	Fri Sep 17 11:21:15 2010 -0400
@@ -46,17 +46,8 @@
 
 
 
-Proposal for API
-================
-
-See api_optimization.txt.
-
-OD asks: Do we really need a different file? If yes, maybe create a subdirectory
-to be able to easily find all files related to optimization?
-
-JB replies: Yoshua's orders.
-
-
+Discussion
+==========
 
 OD asks: Could it be more convenient for x0 to be a list?
  
@@ -92,3 +83,45 @@
 
 OD replies and asks: Partly. Do we really need a non-iterative interface?
 
+OD: I wish we could bring the Theano and Numpy interfaces closer to each other.
+It would be nice if we could do something like:
+
+    # Theano version.
+    updates = sgd([p], gradients=[g], stop=stop, step_size=.1)
+    sgd_step = theano.function([input_var, target_var], [], updates=updates)
+    while not stop.value:
+        input, target = training_iter.next()
+        sgd_step(input, target)
+
+    # Numpy version (you can replace *.value with regular numpy arrays).
+    sgd_step = sgd([p.value], gradients=g_func, stop=stop.value, step_size=.1)
+    while not stop.value:
+        input, target = training_iter.next()
+        sgd_step(input, target)
+
+where sgd would look something like:
+
+    from itertools import izip  # Python 2 iterator zip, used in __call__
+
+    class sgd(...):
+        def __init__(self, parameters, cost=None, gradients=None, stop=None,
+                     step_size=None):
+            self.parameters = parameters
+            self.step_size = step_size
+            # Allow for extra arguments to be provided in self.__call__, that
+            # are forwarded to the underlying gradients function.
+            self.gradients = lambda *lst, **kw: gradients(
+                *(parameters + list(lst)), **kw)
+            ...
+
+        def __call__(self, *lst, **kw):
+            grads = self.gradients(*lst, **kw)
+            for param, grad in izip(self.parameters, grads):
+                param -= self.step_size * grad
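+
+In the Numpy version above, the gradients callable then receives the current
+parameter values followed by whatever extra arguments are passed to the step
+function. A hypothetical g_func for a single parameter might look like this
+(the linear least-squares cost is an assumption made purely for illustration):
+
+    import numpy
+
+    def g_func(p_value, input, target):
+        # Must return one gradient per parameter, as a tuple. Here we assume
+        # the cost 0.5 * ||dot(input, p) - target||^2, whose gradient w.r.t.
+        # p is dot(input.T, residual).
+        residual = numpy.dot(input, p_value) - target
+        return (numpy.dot(input.T, residual),)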
+
+Then a wrapper to provide a scipy-like interface could be:
+
+    import numpy
+
+    def minimize(x0, f, df, algo, **kw):
+        stop = numpy.array(0, dtype=numpy.int8)
+        # Look up the algorithm class by name; the algorithm is expected
+        # to raise the stop flag in place once it decides to stop.
+        algo_step = eval(algo)([x0], cost=f, gradients=lambda x: (df(x), ),
+                               stop=stop, **kw)
+        while not stop:
+            algo_step()
+        return x0  # updated in place by the successive steps
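+
+The wrapper relies on stop being a 0-d array shared by reference: the
+algorithm can raise the flag in place and the caller's loop sees the change.
+A self-contained illustration of that mechanism (fake_algo_step is a stand-in
+invented here, not part of the proposal):
+
+    import numpy
+
+    stop = numpy.array(0, dtype=numpy.int8)
+
+    def fake_algo_step(state={'n': 0}):
+        # Pretend to take an optimization step, and raise the stop flag
+        # after 10 iterations via an in-place update.
+        state['n'] += 1
+        if state['n'] >= 10:
+            stop.fill(1)
+
+    while not stop:
+        fake_algo_step()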
+
+