# HG changeset patch
# User Dumitru Erhan
# Date 1275426403 25200
# Node ID 07bc0ca8d2461660d2de793446f15c0b15eb8b9c
# Parent c778d20ab6f807359979da4de6a54b5738249cdc
added paragraph comparing "our" self-taught learning with "theirs"

diff -r c778d20ab6f8 -r 07bc0ca8d246 writeup/nips2010_submission.tex
--- a/writeup/nips2010_submission.tex	Tue Jun 01 16:06:32 2010 -0400
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 14:06:43 2010 -0700
@@ -688,6 +688,16 @@
 it was very significant for the SDA (from +13\% to +27\% relative change).
 %\end{itemize}
+In the original self-taught learning framework~\citep{RainaR2007}, the
+out-of-sample examples were used as a source of unsupervised data, and
+experiments showed positive effects in a \emph{limited labeled data}
+scenario. However, many of the results of \citet{RainaR2007} (who used a
+shallow, sparse coding approach) suggest that the relative gain of self-taught
+learning diminishes as the number of labeled examples increases (essentially,
+a case of ``diminishing returns''). We note that, for deep
+architectures, our experiments show that this positive effect persists
+even in a scenario with a \emph{very large number of labeled examples}.
+
 Why would deep learners benefit more from the self-taught learning framework?
 The key idea is that the lower layers of the predictor compute a hierarchy of
 features that can be shared across tasks or across variants of the