diff writeup/nips2010_submission.tex @ 537:47894d0ecbde

merge
author Dumitru Erhan <dumitru.erhan@gmail.com>
date Tue, 01 Jun 2010 18:28:43 -0700
parents 5157a5830125 22d5cd82d5f0
children f0ee2212ea7c
--- a/writeup/nips2010_submission.tex	Tue Jun 01 18:28:09 2010 -0700
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 18:28:43 2010 -0700
@@ -86,12 +86,13 @@
 Self-taught learning~\citep{RainaR2007} is a paradigm that combines principles
 of semi-supervised and multi-task learning: the learner can exploit examples
 that are unlabeled and/or come from a distribution different from the target
-distribution, e.g., from other classes that those of interest. Whereas
-it has already been shown that deep learners can clearly take advantage of
-unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small}
-and multi-task learning, not much has been done yet to explore the impact
+distribution, e.g., from other classes than those of interest.
+It has already been shown that deep learners can clearly take advantage of
+unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small},
+but more needs to be done to explore the impact
 of {\em out-of-distribution} examples and of the multi-task setting
-(but see~\citep{CollobertR2008}). In particular the {\em relative
+(one exception is~\citet{CollobertR2008}, which however uses very different
+learning algorithms). In particular, the {\em relative
 advantage} of deep learning in this setting has not been evaluated.
 The hypothesis explored here is that a deep hierarchy of features
 may be better able to provide sharing of statistical strength
@@ -513,8 +514,8 @@
 Here we chose to use the Denoising
 Auto-Encoder~\citep{VincentPLarochelleH2008} as the building block for
 these deep hierarchies of features, as it is very simple to train and
-teach (see Figure~\ref{fig:da}, as well as 
-tutorial and code at {\tt http://deeplearning.net/tutorial}), 
+explain (see Figure~\ref{fig:da}, as well as the
+tutorial and code at {\tt http://deeplearning.net/tutorial}), 
 provides immediate and efficient inference, and has yielded results
 comparable to or better than RBMs in a series of experiments
 \citep{VincentPLarochelleH2008}. During training, a Denoising
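For readers who want to see the mechanics this paragraph describes, here is a minimal numpy sketch of one denoising auto-encoder update, assuming sigmoid units, tied weights, and cross-entropy reconstruction loss as in \citep{VincentPLarochelleH2008}. The function name dae_step, the corruption rate, and the layer sizes are illustrative assumptions, not the paper's configuration; the authors' actual (Theano) code is the tutorial at http://deeplearning.net/tutorial.

# Minimal denoising auto-encoder sketch: sigmoid units, tied weights,
# cross-entropy reconstruction loss (Vincent et al., 2008).  All names
# and hyper-parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def dae_step(X, W, b_h, b_v, corruption=0.3, lr=0.1):
    """One gradient step on a batch X in [0, 1]; mutates W, b_h, b_v in place."""
    # 1. Corrupt the input: zero out a random fraction of its components.
    x_tilde = X * rng.binomial(1, 1.0 - corruption, X.shape)
    # 2. Encode the corrupted input; decode with the tied (transposed) weights.
    h = sigmoid(x_tilde @ W + b_h)            # hidden representation
    z = sigmoid(h @ W.T + b_v)                # reconstruction of the clean X
    # 3. Cross-entropy error measured against the *uncorrupted* input.
    eps = 1e-10
    loss = -np.mean(np.sum(X * np.log(z + eps)
                           + (1 - X) * np.log(1 - z + eps), axis=1))
    # 4. Backprop: sigmoid + cross-entropy give the simple (z - X) error signal.
    dz = (z - X) / len(X)
    dh = (dz @ W) * h * (1.0 - h)
    W -= lr * (x_tilde.T @ dh + dz.T @ h)     # tied weights: both paths contribute
    b_h -= lr * dh.sum(axis=0)
    b_v -= lr * dz.sum(axis=0)
    return loss

# Toy usage: a random binary batch standing in for 28x28 character images.
n_vis, n_hid = 784, 500
W = rng.uniform(-0.1, 0.1, (n_vis, n_hid))
b_h, b_v = np.zeros(n_hid), np.zeros(n_vis)
X = rng.binomial(1, 0.5, (20, n_vis)).astype(float)
for epoch in range(5):
    print("reconstruction loss:", dae_step(X, W, b_h, b_v))

Note that the loss is computed against the clean input X even though only the corrupted x_tilde is fed through the encoder; that asymmetry is what distinguishes the denoising criterion from ordinary auto-encoder reconstruction.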