=========================
Optimization for Learning
=========================

Members: Bergstra, Lamblin, Delalleau, Glorot, Breuleux, Bordes
Leader: Bergstra



Initial Writeup by James
=========================================



Previous work: scikits, openopt, and scipy provide function optimization
algorithms.  These are not currently GPU-enabled, but they may be in the future.


IS PREVIOUS WORK SUFFICIENT?
--------------------------------

In many cases it is (I used it for sparse coding, and it was ok).

These packages provide batch optimization, whereas we typically need online
optimization.

It can be faster (to run) and more convenient (to implement) to have
optimization algorithms as Theano update expressions.
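For instance, here is a minimal sketch of plain SGD written as Theano update
expressions (the logistic-regression cost and the 784/10 sizes are made up
purely for illustration)::

    import numpy
    import theano
    import theano.tensor as T

    x = T.matrix('x')                          # minibatch of inputs
    y = T.ivector('y')                         # integer class labels
    w = theano.shared(numpy.zeros((784, 10), dtype=theano.config.floatX), name='w')
    b = theano.shared(numpy.zeros(10, dtype=theano.config.floatX), name='b')

    p_y = T.nnet.softmax(T.dot(x, w) + b)
    cost = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])

    lr = 0.1                                   # fixed learning rate
    gw, gb = T.grad(cost, [w, b])

    # the whole optimization step is just a list of update expressions,
    # compiled together with the model itself
    train = theano.function([x, y], cost,
                            updates=[(w, w - lr * gw), (b, b - lr * gb)])

Because the step is compiled into the same graph as the model, it runs
wherever the model runs (including the GPU), which is the speed argument above.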


What optimization algorithms do we want/need?
---------------------------------------------

 - sgd
 - sgd + momentum
 - sgd with an annealing schedule (a sketch covering the first three items follows this list)
 - TONGA
 - James Martens' Hessian-free optimization
 - conjugate gradients, batch and (large) mini-batch [that is also what Martens' method uses]
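
One possible shape for the first three items, as a helper returning a Theano
updates list (the function name, hyper-parameter defaults and the
1/(1 + anneal*t) schedule are illustrative choices, not a settled API)::

    import numpy
    import theano
    import theano.tensor as T

    def sgd_momentum_updates(params, cost, lr0=0.01, momentum=0.9, anneal=1e-4):
        """SGD with momentum and a 1 / (1 + anneal * t) learning-rate schedule."""
        t = theano.shared(numpy.asarray(0., dtype=theano.config.floatX), name='t')
        lr = lr0 / (1. + anneal * t)
        updates = [(t, t + 1)]
        for p in params:
            g = T.grad(cost, p)
            # one velocity buffer per parameter, same shape and dtype
            v = theano.shared(numpy.zeros_like(p.get_value()))
            v_new = momentum * v - lr * g
            updates.append((v, v_new))
            updates.append((p, p + v_new))
        return updates

With momentum=0 and anneal=0 this degenerates to plain sgd, so one helper
could cover the first three bullets; TONGA and Hessian-free need more
machinery (curvature estimates, an inner CG loop) and probably their own
objects.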

Do we need anything to make batch algorithms work better with Pylearn components? (see the L-BFGS sketch below)
 - conjugate methods? yes
 - L-BFGS? maybe, when needed
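
As a data point on the batch side, wrapping a Theano cost/gradient pair for
scipy's L-BFGS is already short; the question is more whether Pylearn should
ship the boilerplate (the quadratic toy cost below is made up, and real models
have several parameter tensors that would need flattening into one vector)::

    import numpy
    import theano
    import theano.tensor as T
    from scipy.optimize import fmin_l_bfgs_b

    w = theano.shared(numpy.zeros(5), name='w')       # parameters as one flat vector
    target = numpy.arange(5, dtype='float64')
    cost = T.sum((w - target) ** 2)                    # stand-in training criterion

    f_cost = theano.function([], cost)
    f_grad = theano.function([], T.grad(cost, w))

    def f_and_g(x):
        # scipy hands us a flat float64 vector; push it into the shared variable
        w.set_value(x)
        return float(f_cost()), numpy.asarray(f_grad(), dtype='float64')

    x_opt, f_opt, info = fmin_l_bfgs_b(f_and_g, w.get_value())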





Proposal for API
================

See api_optimization.txt.

OD: Do we really need a separate file? If so, maybe create a subdirectory so
    that all files related to optimization are easy to find?